The INCLUDE statement does not increase compile time; it speeds it up. INCLUDE does not make spaghetti code; it keeps code more organized.
...Having the same/slightly modified/updated support function copied into several files makes for a huge mess; one copy steps on another's toes or undoes something needed by this version. Support functions get copied, pasted, and updated, but no one goes back through all their files to update every instance of the function.
Four points in response:
(1) I'm assuming you are talking about the C programming language, or perhaps the C++ language - NOT the C# language, which has an entirely different way of going about such includes. If I'm wrong about that, then disregard everything I've posted on it - we'd be talking about apples and pears.
(2) As I understand "spaghetti" code: you end up with lots of inter-dependent source files which fail if even one of them is missing. Or, even worse, some function in one file is overridden in another to do something else. Exactly the same thing occurs with C's #include - see below.
(3) A C #include statement is nothing more than "copy that file into this spot before compiling". This means that if you #include FileA into FileB, it's as if you opened FileA, selected all, pressed Ctrl+C, swapped to FileB and pressed Ctrl+V. If you don't believe me, see the answers from experts:
http://stackoverflow.com/questions/5735379/what-does-include-actually-do
http://www.cplusplus.com/forum/articles/10627/
http://stackoverflow.com/questions/1539347/what-are-ways-of-improving-build-compile-time
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka4022.html
(4) So when you have FileA "included" into more than one other file, the compile time increases, because the same code literally gets compiled more than once (note the literal meaning of the word "literally"). This gets worse when you include a file which itself includes other files. You can very easily have a source file only a few lines long, including one or two others, which the pre-processor then expands into thousands of lines of source code for the compiler to handle. And it's worse in C than in Lisp's last-loaded-wins approach, because the compiler may compile files in any order - especially if it's set to compile using multiple threads - so you cannot even say for sure which "version" of a function will end up in the compiled result.
There's actually a way to try and avoid this issue in C. It's called "include guards":
http://www.cplusplus.com/forum/general/71787/
A similar approach in Lisp would be to check whether a function is already defined before loading the file.
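A minimal sketch of that check in AutoLISP - the names c:MyCommand and MyFile.lsp are placeholders; in practice you'd use one of the defun names actually inside the file:

```lisp
;; Load MyFile.lsp only if c:MyCommand isn't defined yet
(if (not c:MyCommand)
  (load "MyFile.lsp")
)
```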
This does the same job as the C construct below. The file to be included defines a pre-processor macro so that other files can check for it:
#ifndef MY_FILE_H // Only proceed if the macro's not already defined
#define MY_FILE_H // Define the guard macro
// ... the rest of the code inside the MyFile.C/H source ...
#endif // MY_FILE_H - end of the guard
Then in the other files you do the following:
#ifndef MY_FILE_H // Check that the file's not already included somewhere else
#include "MyFile.C/H" // Copy the file into this spot
#endif
// Rest of code for this other file
To me the Lisp way is a lot less convoluted: it's a simple one-liner (strictly speaking two forms, if and load), you don't need to edit the file being loaded, and all you need to know about it is one of its defun names (which you should know already, since you want to use them). Even if you try to do the exact same thing in C, it gets very hairy:
http://stackoverflow.com/questions/6916772/can-i-re-define-a-function-or-check-if-it-exists