Why Use Make? (2013) (ocks.org)
164 points by jeffreyrogers on March 29, 2014 | hide | past | favorite | 103 comments


Make is an absolutely wonderful, wonderful tool.

Most of the common criticisms of Make are actually criticisms of autoconf, which I agree is a hideous tool. (On the other hand, autoconf is a tool intended to address a hideous problem, so perhaps that's inevitable).

A lot of the more recent build tools I see are largely reinventions of Make. Make is very widely supported (most basic projects can make do[0] with portable Makefiles, though GNU Make is also available for most systems), and its syntax is actually very easy to grasp and manipulate[1].

[0] no pun intended

[1] Most projects only need a very small subset of what Make has to offer, anyway, and that can be learned in a matter of minutes.


Make is a great tool for its intended uses. Unfortunately, many uses of Make-like tools also require the unintended uses; my favorite example is auto-dependencies, which most compilation tasks require and which is still tedious to get right [1]. It also (mostly) lacks modern programmable interfaces, which leads to Makefiles laden with hacks and mumbo-jumbo. At least for compilation tasks, Make is clearly becoming outdated.

[1] Something like http://mad-scientist.net/make/autodep.html


Honestly I don't get why auto dependencies are even needed. IMO splitting manually managed dependency rules into multiple, manually ordered makefiles and importing them in a master makefile should work just fine. It is a lot of work to switch to this from a managed environment, but if you start with it from the beginning, adding dependencies as they come up is not a big deal. As a developer I like having nicely formatted dependency documentation, including comments, in one place, instead of having to look at what autoconf & co are spitting out.
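For illustration, the scheme described above might look something like this (module names and paths are invented):

```make
# Master Makefile: each module keeps a hand-written, commented
# dependency fragment, included here in order.
include src/core/deps.mk
include src/net/deps.mk
include src/ui/deps.mk

# where src/net/deps.mk contains entries like:
#   # socket.o needs the shared buffer header from core
#   src/net/socket.o: src/net/socket.c src/net/socket.h src/core/buf.h
```

The fragments serve as both build rules and dependency documentation.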


So every time I add an import or #include, I need to update the dep graph manually? That may work for larger teams, but that would seriously hurt productivity for smaller projects.

I think the main issue with approaches like yours is that there's no way to verify correctness, so it's easy to introduce bugs.


- small projects should also lead to small dependency graphs, so I don't really get why it's bad there.

- I'm just not a big fan of automating the configuration of a build system - you put yet another machine around a machine. As long as it works, OK, but as soon as you run into unexpected behavior it becomes a huge time sink to debug. When you update the Makefile contents manually you take ownership of everything, and you get a deeper understanding of how everything works together, right down to the command parameters sent to the compiler and linker. If you don't want to do that I'd suggest switching to something like cmake - if it has to do 'magic', at least have the behavior specified in a coherent manner.


How does this lead to having dependency information in one place? It puts it in at least two. The file and the makefile.

Not at all sure what autoconf has to do with anything, autodeps are a completely separate problem space.


What I meant was one place to look for dependencies for the whole project and/or one of its modules - if it's only documented in the source I need to analyze all the source files instead. But you're right that the information is duplicated - a price I'm willing to pay here. And yes, I meant auto dependencies, not autoconf, sorry about that.


The auto-dependency problem is a perfect example of how there is room for improvement in make-land. When something that basically everyone wants to do takes a long web page to describe a complicated method that sort of works, that is a compelling indicator that there must be a better way.

Incidentally, I am working on a make-replacement that is intended to address this and similar needs. It's still in very early stages, but I think I'm onto something.

The idea is to write a very low-level tool that leaves out most of Make's higher-level abstractions. In my tool there are no recipes, no implicit rules, no variables or variable substitutions, no conditionals, etc. The input to my tool is just the precise specifications of the commands you need to run, their inputs and outputs (so a precise dependency graph can be calculated), and with everything fully-expanded already.

The idea, then, is that whatever higher-level abstractions you want (if any) you build into a higher-level tool. The higher-level tool just spits out a file describing the list of tasks. Then the higher-level tool can worry about the higher-level structure, policy, configuration, etc. of your project. So instead of writing things like implicit rules in Makefiles, you just write some code that explicitly generates tasks.

For example, with Make, you might write an implicit rule like this:

     %.o : %.c
             $(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@
Then Make magically decides which output files match this implicit rule. My idea is that, instead of this, you explicitly apply your rules to your inputs. Your build system could instead be a Ruby/Python/etc. script that looks something like this:

    for c_file, o_file in files:
      tasks.append(Task(
        target=o_file,
        source=c_file,
        command="gcc -c %s -o %s" % (c_file, o_file)
      ))

    print(tasks)
I think this is much more convenient than having to program in Make and learn its quirky abstractions.

I have an elegant solution for the auto-dependency problem (unproven, but I think it's promising). The idea is that your dependency-calculating tasks are still just tasks, but the tool knows how to integrate the calculated dependencies back into the overall dependency graph. These dependency-generating tasks depend on the files they are generating dependencies for, so it is all part of the unified dependency graph.
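To make that concrete, here is a toy sketch of the unified-graph idea (Task, run_tasks, and all file names here are hypothetical illustrations, not the project's actual API): tasks are a flat list, and dependencies discovered by a scanning step are simply merged into the same graph before scheduling.

```python
from collections import namedtuple

# target: file this task produces; sources: files it reads;
# action: callable that performs the work.
Task = namedtuple("Task", ["target", "sources", "action"])

def run_tasks(tasks, extra_deps=None):
    """Run tasks in dependency order. extra_deps maps a target to
    additional sources discovered by a dependency-scanning task,
    merged into the same graph. (No cycle detection -- it's a toy.)"""
    extra = extra_deps or {}
    by_target = {t.target: t for t in tasks}
    done, order = set(), []

    def build(target):
        if target in done or target not in by_target:
            return  # already built, or a leaf source file
        task = by_target[target]
        for src in list(task.sources) + extra.get(target, []):
            build(src)
        task.action()
        done.add(target)
        order.append(target)

    for t in tasks:
        build(t.target)
    return order

log = []
tasks = [
    Task("app", ["a.o", "b.o"], lambda: log.append("link app")),
    Task("a.o", ["a.c"], lambda: log.append("cc a.o")),
    Task("b.o", ["b.c"], lambda: log.append("cc b.o")),
]
# A scan task reported that a.o also depends on util.h; the engine
# just folds that into the one dependency graph.
order = run_tasks(tasks, extra_deps={"a.o": ["util.h"]})
```

The discovered header dependency participates in ordering exactly like the hand-written ones, which is the point of keeping a single graph.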

If you're interested, star my project and follow my blog (where I will make any announcements about it):

https://github.com/haberman/taskforce

http://blog.reverberate.org/


You should check out djb's redo [1], specifically the implementation by apenwarr [2]. It's close to the separation of concerns you're looking for: programs written in your language of choice enumerate the dependency graph, an external tool collects them and executes its way through the dependency graph towards your targets.

I agree with you that Make's pattern matching facilities are ugly and could be improved upon. I tried something of the sort [3] with redo, and it worked well, but felt too alien.

From what I've seen at big companies with huge builds, the "dependency graph itself changed, now what do we rebuild" problem is the killer problem. I bet you've seen instances of that at Google.

[1] http://cr.yp.to/redo.html

[2] https://github.com/apenwarr/redo

[3] https://news.ycombinator.com/item?id=7464047


See also remake. It's a bit simpler, I think.

https://github.com/rocky/remake


No mention of tup?

You can do something like the make variant with:

  : foreach *.c |> gcc -c %f -o %o |> %B.o
Edit: Note that this example is different from what Make will do. With Make you're saying "whenever you need to satisfy a *.o dependency, go compile a .c". With Tup, it is "find all *.c files and make a .o from each of them". In fact, a complete Tupfile that just compiles all .c files it finds and links the resulting .o files is simply (copied straight from the examples):

  : foreach *.c |> gcc -Wall -c %f -o %o |> %B.o
  : *.o |> gcc %f -o %o |> hello
Or if you want to output files named in some other way and not as basename.o, you can write a script that generates those rules and include it inline:

  run ./myscript.py
Or you can write a Lua script that uses the Tup Lua API to define rules.


I'm using tup right now (lightly) and rather enjoying it.


Are you familiar with Ninja? (which is absolutely excellent IMHO) http://martine.github.io/ninja/

> Ninja is a small build system with a focus on speed. It differs from other build systems in two major respects: it is designed to have its input files generated by a higher-level build system, and it is designed to run builds as fast as possible.

Obligatory: https://xkcd.com/927/


I recently played a bit with ninja on a tiny c project -- as I was already using cmake, the transition was seamless, and even for such a very tiny project the speedup was tangible (but absolutely not relevant in any way, everything was building fast enough :).

Also played a bit with tup, and it's quite nice too.


Concur, what haberman describes really sounds like ninja.


TL;DR: I think Ninja is a step in the right direction, but I want to take it further.

Yes, I came across Ninja when searching for prior art. While its "philosophy" essays make it sound like it is exactly what I'm looking for, when I dug in deeper I found that it was still a ways away from what I had in mind.

When you look at Ninja files, you find some of the things that I explicitly want to get rid of: variables and rules, evaluation and scoping. Their solution for C/C++ dependencies is to have special magic in the tool.

Maybe the biggest difference between Ninja and what I am envisioning is that, although pared down and low-level, Ninja is still a command-line tool implemented as a language (.ninja files). What I have in mind is an embeddable library that deals only in flat, fully-expanded lists of tasks. At its core it is much more like an API than a tool (though wrapping it in a convenient command-line tool will of course be an important use case).

Think about it this way. There are so many directions you might want to extend your build system:

1. you might want to run your jobs in a different way, for example by distributing them to a cluster of build workers.

2. you might want to run your builder as a daemon that watches the filesystem and receives notifications of changed files, instead of having to figure everything out from scratch every time.

3. You might have custom, per-language logic for calculating dynamic dependencies.

4. You might want to fingerprint your inputs and outputs and cache artifacts so that you can return cached results instead of doing redundant work (especially useful if you have a build cluster that many people share).

5. You might want to do strongly isolated and hermetic builds that run each command against a directory tree that only contains its supposed inputs.

6. you might want to closely integrate the build system with your IDE, so the IDE can show deep and up-to-date information about the progress of the build.

7. I think of Light Table-like environments as being the future of IDEs. Part of the philosophy of Light Table is that you can program your own custom editor interactions easily ("The future is specific": http://www.chris-granger.com/2012/05/21/the-future-is-specif...). Programming a specific editing mode would often involve calculating artifacts derived from your source files to show in a different editor pane (for example, the assembly corresponding to your C, or a graphical representation of your source). To do this, you want something like "make", but programmable so you can generate and remove rules on the fly.

With a Ninja-like approach, you would have to add these features to Ninja itself. But once you do this, Ninja is going to get even less small and simple.

This is why I think the way forward is to have an API-based, embeddable builder engine. The core just knows how to take fully-expanded task specifications and evaluate the dependency graph to decide what jobs need to run. Then the logic for how to run jobs, record their results, possibly return cached results etc. can be easily mixed and matched. For example:

1. I would ship with a very simple command-line tool that acts mostly like "make" or "ninja": you run it, it looks at the filesystem to decide what to do, and then does it. When the build is finished the tool exits.

2. I would likely also ship with a daemon version of the same that watches the filesystem with inotify/dnotify and automatically keeps outputs up to date when inputs change. The daemon would likely support some kind of service API that you could use to programmatically query the list of tasks and jobs, and probably also an HTTP/HTML UI.

3. Anyone who wanted to integrate with a distributed builder service could embed the library but write completely custom code for actually executing the jobs.

But here is the key thing: the Tasks that a project writes can stay the same no matter what fancy stuff gets built on top. Whether you are doing the dead-simplest local build from scratch or whether you have a long-lived daemon that distributes your build and heavily caches your artifacts, there is still a common data model for tasks and jobs, and still a common dependency-evaluating core that can be shared no matter where the library is embedded.


"Your build system could instead be a Ruby/Python/etc. script"

So, to compile a slightly complicated project, I would need Ruby for the main project, Python for a library it builds on, Haskell for another library it builds on, etc?

In theory, that's not a new problem, but I think that, by removing the common language that make provides, it will become much more common in practice.

At the extreme, it would only be natural, but disastrous, for the guys writing language X to use language X to write their make file. Result: you need to bootstrap language X through a zillion versions to port it to a new system, or mess around trying to build its parts by hand.


Yeah, it might be better to standardize on a common language (I would probably choose Lua) for building the tasks, both for the reason you cite and to make it easier to share and reuse functionality. But it would just be a convention -- you could still build it in any language or integrate it with anything without having Lua forced on you.


This reminds me (pretty obviously I guess) of the split in saltstack (Python based Chef/puppet killer). Build highly specific tasks and then have policy decide which ones to call in which order.

Whilst salt is not aiming at Make-for-DevOps it certainly is something that is in the realm of possible.

The gap between compiling a bunch of source code that becomes an executable and a bunch of source code that becomes twenty servers running fifty apps is very very small.


You're basically describing WAF. Even your sketched up syntax is very similar to what it does. Here is how you run candle.exe which is a part of WiX using a one-off rule:

        return ctx(
            rule = 'candle.exe -nologo -out ${TGT} ${SRC} -dMySource="%s"' % dirpath,
            source = ['%s.wxs' % group],
            target = ['%s.wxsobj' % group]
        )
It has canned dependency node generators in libraries for most popular languages:

    ctx.objects(source = glob.glob('src/*.cpp'), target = 'OBJS')
Here you don't specify the names of the target object files directly and instead let WAF's c++ tool figure them out. On Windows, they will be .obj, on Linux .o and maybe something else on Mac.


> I think this is much more convenient than having to program in Make and learn its quirky abstractions.

until you get to update a codebase where all the build scripts are the worst form of perl/forth/ksh/awk scripts, all written by someone who thought a new script on top of the old one would be easier


The verbosity of your suggested syntax is very off putting. Why not build on the Make paradigm and syntax but fix the bugs? The autodeps problem is the only real flaw IMHO. The pattern syntax doesn't bother me too much.


I did something similar with my python make-alike, pub[1], where you can have something like:

    def outname(f):
        return f[:-1] + "o"

    @file_rule("*.c", outname)
    def compile(f):
        # do the compiling here
        pass
Here's an example of file_rule in use: https://github.com/llimllib/Newsite/blob/master/pub.py#L27

[1]: https://github.com/llimllib/pub/


Hi Josh. You might be interested in a Reddit thread discussing a Python-based build tool 'Meson'.

http://www.reddit.com/r/programming/comments/21i19h/a_generi...

https://github.com/jpakkane/meson-mirror


The GNU Make manual pages are really great. I use them all the time when I need to be reminded of some syntax or feature. https://www.gnu.org/software/make/manual/make.html


I was surprised when I learned `make` by how simple it mostly is. Unfortunately it eschews man-pages in favor of info-pages, and learning how to use those just to learn how to use `make` was not as pleasant.


Any time you want to groan about a tool's use of info, see if the BSDs have a similar tool. Most GNU tools simply implement an ancient UNIX tool so it's not like GNU has a monopoly on them. So in this case see FreeBSD make:

http://www.freebsd.org/cgi/man.cgi?query=make&sektion=1

which points you to this:

https://www.freebsd.org/doc/en/books/pmake/

But really, if you just hate using info(1), always remember that GNU also puts the pages on the web in HTML multichunks, HTML one big page, and PDF, so there's no reason to dodge a GNU tool just because you hate info(1).

https://www.gnu.org/software/make/manual/


Or put something like this into a shell script or function:

    info --subnodes "$@" 2>&1 | less
and info(1) output will look a lot like man(1) output.


Another option, if you're an Emacs user, is to read it in Emacs. Emacs's info reader is very good.


If someone's argument is that info is hard to learn, it's pretty unlikely they're also an Emacs user (the standalone info uses emacs keybindings by default).


pinfo is an easier to use viewer for info files (it tries to adopt a lynx-like style).


The info-pages are probably fine if you already know emacs. Otherwise they're not worth bothering with, as it's all in nicely-formatted web pages now.


For me the problem is the argument formatting and dependency chasing for clang-osx and msvc/intelcc-windows


I write my own automation/build tools in Python

Envoy pynotify and fnmatch pretty much do the heavy lifting.

I do it this way because:

- Since I know Python, no extra syntax knowledge is required

- Python is going to be readable to me in 6 months when I've forgotten the syntax of whatever build tool I just used

- All my machines, desktop and servers, already have Python installed

- It's fast: merging 20-30 largish text files takes around 10-15 milliseconds

- It's very, very flexible (it's Python)
I used make extensively when I was at Uni and quite frankly if I never have to touch it again I would be a happy bunny.

It's an incredibly powerful and clever tool hiding behind an interface that is (second only to Git's) the worst I've ever had to use.

EDIT: I've actually been toying with the idea of writing a Python library to simplify things, as lots of the code I end up writing is very similar across projects; it's been one of my "think about in the shower" projects for quite a while.


> a python library to simplify things

That's basically scons. Give it a try, I've been using it for years to manage a fairly complex, unusual build process with minimal effort (less than 200 lines of Python code).

http://www.scons.org/


Thanks for the link :).

Scons is awesome but for what I use Python for (minification of web assets, automatically running image optimisations) it's taking a sledgehammer to a walnut.


Fair enough, though I've found scons is extremely flexible and lightweight, in the sense that it's not very insistent about imposing a certain style or process on you. Unlike the vast majority of other build tools.


Have you looked at webassets? [0]

[0]: https://github.com/miracle2k/webassets


I'm not saying that it is a 'cure-all', but make -n lets you dry-run and test your rules before you actually use them (with "touch"), and in most cases I'm sure that is faster than both reading the manual and writing a Python library to replicate the functionality (reinventing the wheel).

Now, this approach depends of course on how well you know make to begin with. But there are a lot of examples out there, and the man pages of GNU make are quite good. It is quite true to the old make facility, so even older book examples should work with it.


I'm not knocking make, it's a good piece of software.

The reason I roll my own in Python is that I get exactly the functionality I want working in exactly the way I want and quite often I can replace a binary dependency which I use a tiny part of with a python function.

I find that reduces my cognitive load as I'm only working with one thing instead of others.

ymmv.


Make is a very under-appreciated tool. I think it gets a bad rap because many people's only exposure to it is in large projects (where Make has some issues) or when coupled with autotools (which is rather ugly).

If that has been your only exposure to Make, you should take another look at it, and consider using it for your next small- or medium-sized project.


If you read down the comment threads here, you'll notice something. Many many folks get frustrated with some aspect of Make, and then turn around and say "I can do better myself in [language]". And then they go off and do 30% of Make in their favorite idiom. Sorry, but as an old fart, these all read as "Wah, make is haaard, and inconvenient".

This isn't to say that I think Make hung the moon in radiant perfection, but I haven't seen any tool that has a clearly superior set of compromises and tradeoffs. Gorgeous example: automatic dependency generation. Does anyone really think the identical dep generation codebase will work on 20 year old C and node? It's not hard to glom whatever dependency mapping you want into your make workflow and drop it into separate included makefiles.

It feels easier to write your own. But what that really leads to is re-learning all the shit the make maintainers have learned in the last (look it up.. DAMN.) nearly 40 years.

Or maybe _not_ learning it, and duplicating mistakes which have been solved for decades.


The reason for all these alternate build tools isn't that Make is hard. The problem is that it's too generic and low-level. It comes with a very limited set of built in rules, after that you're on your own.

For example: If I'm working on a C++ project then I'd like my build tool to be aware of concepts like header files, shared libraries, include paths, compiler options. For me that tool is CMake.

Can I do that in Make? I'm sure I could, but I'd end up with the same problem you're complaining about. I'd essentially be writing my own build system, just implemented in Make. Life is too short for that.


Yes: CMake is a little weird, but it's less bad than all the other options for cross-platform projects, and CMake+Ninja is wonderful. (that said, I do hope something Lua-based like Premake catches on).


I don't think what you describe is possible in make alone. You have to generate makefiles with your header dependencies using GCC's -MMD option, and then include these files from your main makefile. As you say, it's possible, but it feels like you are building hacks upon hacks.

Add to that diamond-shaped dependencies, such as several .h files including another generated .h file, and you're in for many sleepless nights.


The problems with make are in the scope of what it actually tries to do versus what we really want it to do. I explained a few weeks ago in another thread why all the other replacements don't actually provide a solution [https://news.ycombinator.com/item?id=7221515].

Almost every problem with makefiles has nothing to do with the tool itself - it's the environment, and the tool not being sufficiently intelligent (read: psychic) to figure stuff out. Most of the hacks built on top of make (eg, pkg-config) are attempts to fix some of these problems, but IMO they're going about it from completely the wrong direction.

The Nix/Guix approach - declare your entire environment up front - greatly simplifies the requirements of any build system: there is no longer any magic involved. You won't necessarily make the same mistakes, or even need to consider half of the problems Make has had to deal with over the years.

Make is still relevant and useful in combination with Guix/Nix, but it shouldn't need the hacks built on top of it, like pkg-config.


> The ugly side of Make is its syntax and complexity; the full manual is a whopping 183 pages. Fortunately, you can ignore most of this

I cannot take this seriously.

I like the concept of make but it doesn't make up for its own warts. Unfortunately, there is no single build tool I can blindly recommend to people without being extremely familiar with their project and how it builds - no solid, lightweight build tool that pleases more or less everyone without 183 pages of manual and a repugnant syntax.

CMake/Lua gives me hope but we're not quite there yet. Make is pretty decent for small projects, though... I see it as the HTTP of build tools: it has serious issues, but when new tools come up they are built on top of make because it's ubiquitous.


djb redo captures the concept of Make but is a lot simpler and no extra syntax (just uses shell)


POSIX make really does make auto dependencies hard, but GNU make actually solves this fairly cleanly by adding an include directive. As long as you have a script or program to process a file and spit out its dependencies in make style (e.g., GCC does this for C and C++ with its -M flags), you can just include the outputs of that process and make will do the rest.

For example, here's how I handle a simple C project.

    CSRC := [list .c files here]
    DEPS := $(CSRC:.c=.d)
    
    [standard statements for building go here]
    
    %.d: %.c
            gcc -MM -MF $@ $<
    
    -include $(DEPS)
Poof. Automatic dependency handling. You can see how any sort of dependency that can be detected by some sort of preprocessor can be plugged in here.


Some problems I have encountered with this approach:

1. The dash before include causes make to ignore any errors when generating the .d files, which can make it pretty difficult to figure out what went wrong. Removing the dash will show a (harmless) error message for every .d file generated, which is pretty annoying.

2. Say a.c includes b.h and you run make. a.d is created with "a.o: a.c b.h". If at this point you add to b.h an include of c.h, a.o's new dependency on c.h will never be recorded in a.d unless you manually delete a.d. This can cause some pretty nasty bugs! This can be solved by adding

    -MT $*.o -MT $*.d

to the gcc command line, which causes the dep file to be regenerated when one of the dependencies changes (in fact the make manual suggests a similar solution using sed). However, this creates another problem: if you remove a header a.h included by a.c (and the #include line for it), a.d still depends on a.h, so make will fail looking for it until you manually delete a.d (and any other dep file depending on the removed header).

tl;dr: this solution leaves much to be desired, and can be dangerous in some conditions.
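For what it's worth, GCC's -MP flag mitigates the removed-header failure: it emits an empty phony target for each header, so make no longer errors when one disappears. A sketch combining it with the rule from the parent comment:

```make
%.d: %.c
	gcc -MM -MP -MT $*.o -MT $*.d -MF $@ $<
```

Stale .d files for deleted source files still need cleaning up, but the common header-removal case stops breaking the build.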


You can even omit the extra dependency generation rule by using -MMD.

    CFLAGS+=-MMD
    -include $(DEPS)
That's it.


I completely agree with the sentiment of the article. Makefiles are a form of documentation. However, I no longer use make in new projects and instead use redux [1]: my implementation of djb redo. It manages the dependencies and you write your scripts in shell. Simple, straightforward and easy. No more 'make -B', no more make contortions.

[1] https://github.com/gyepisam/redux


I thought with make you also write scripts in shell.


Yes, you're right. However, in redo you don't have a Makefile equivalent; instead, you specify the dependencies in the shell script that generates your target. Basically, the configuration language is shell (or any other program that can be invoked from a shell script).
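For example, a minimal default.o.do in apenwarr's redo might look like the following sketch (using redo's convention that $2 is the target name without its extension and $3 is a temporary output file):

```sh
# default.o.do: how to build any .o file from the matching .c file.
# The dependency is declared right here, in shell, not in a Makefile.
redo-ifchange "$2.c"
gcc -c -o "$3" "$2.c"
```

redo-ifchange both declares the dependency and ensures it is up to date before the compile runs.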


This specific problem would be more simply solved with a script listing the commands. Repeatable, testable, and it documents the process.

It doesn't require differentiating between identically rendered tabs and spaces. It doesn't need dependency hand-holding, with explicit touch or checking last-modified times. The simplest script just builds it all from scratch, like make clean.


If the syntax is bad, why not build something with better syntax? Why are we sticking with it?


I don't quite see the benefit of using make for arbitrary workflows (as opposed to building software) over plain old shell scripts.

But it isn't much more complex, nor does it require much more of your system, so I'd say it's more a matter of preference.


Make has more natural built-in dependency management.

To write your workflow purely in shell scripts, you will invariably use lots of tests.

Also note that they are not necessarily mutually exclusive. You can write Makefiles that call shell scripts (and shell scripts that call make)


I use a ton of inline shell in make for data munging. I often put

SHELL=/bin/bash

at the top of the Makefile when I use bashisms to construct complex pipelines.

And once, in a fit of mad genius, I made a particularly complex and useful Makefile executable by putting #!/usr/bin/make -f at the top. (It was a bad idea, obviously, and I won't be doing that again.)

For large file based datasets, the shell is just a REPL.
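As an example of the kind of thing this enables (file names invented): with SHELL overridden, recipes can use bash process substitution directly.

```make
SHELL := /bin/bash

# Merge two pre-sorted logs, stripping each header row first --
# <(...) is a bashism, hence the SHELL override above.
merged.tsv: day1.tsv day2.tsv
	sort -m <(tail -n +2 day1.tsv) <(tail -n +2 day2.tsv) > $@
```

The merge only reruns when one of the input files changes, which is exactly the incremental behavior you want for data munging.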


I've found these things to make Make useful for shell scripting:

- straightforward and obvious support for multiple named entry points

- by default, process terminates when a command returns a non-0 exit code

- often-adequate (though still crappy, especially for paths with spaces) set of built-in string processing functions

- scripts often somewhat portable between Unix-style shells and Windows

(I suppose there may be Unix-style shells available for Windows, but I've never found a low-dependency one that actually works. So combined with the first three advantages I've stuck with make.)

I've actually found Make much better as a shell script replacement than it is for its stated purpose of building software :)


If you can express your workflow as a set of dependencies (granted, not all workflows are easily expressed this way), make gives you parallel and incremental computation "for free".

Imagine that you needed to download a tar archive, unpack it, then run several simulations followed by regressions followed by figure plotting on the data. You could write a shell script to do this, but it would be hard to make the shell script simulate the capabilities of `make -j', and you'd have to do a lot of timestamping and file existence checking to simulate the incremental computation capabilities of make.
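That pipeline might be sketched as a Makefile like this (all commands, URLs, and names are invented); `make -j` then runs the simulations in parallel, and touching one simulation's inputs reruns only its part of the chain:

```make
data.tar.gz:
	curl -O https://example.com/data.tar.gz

data: data.tar.gz
	tar xzf $< && touch $@

sim%.out: data
	./simulate --case $* > $@

plot.png: sim1.out sim2.out sim3.out
	./plot $^ -o $@
```

`make -j3 plot.png` gives you the parallel, incremental behavior for free.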


The advantage of make, or any other decent build tool, is the dependency management; what depends on what, what's changed, what needs to be (re)built, what order to do all this. In the absence of all that, everything gets rebuilt all the time. Not a big deal for small projects but an absolute killer as projects get larger.


How do you track directory and file changes with shell scripts?


Use inotifywait


I never understood why make has to use tabs. Even if that made sense in 1977, why has it remained so until now? Why not let any whitespace, even spaces, have the same effect of starting the command line?


Make wouldn't be the first build tool I would try; I would go with something like gradle first. Even though, in Mike's case, make looks like a good match for his requirements.


Gradle looks to me like it's full of good intentions, of exactly the sort that in a few years will have led them straight to hell, at which point someone will reinvent ant or make with different syntax and we'll be back where we started.

I worked on a proprietary build system remarkably similar to gradle for many years. The cleverer a build system tries to be the more likely it will become a self sustaining beast that consumes ever more of your time and destroys productivity. They look great with relatively simple systems, but a few years into production with multiple deployment target configurations and you will want to murder people. Build systems should be stupid, simple, and trivially predictable.


It's been going for five years so far, and has yet to get noticeably near the infernal realms.

I know Gradle fairly well, and I wouldn't describe it as particularly clever. The basic ideas in it are quite simple (and actually fairly similar to make!). Most of the complexity comes from specific build tasks for specific purposes, which doesn't complicate the core.


The key idea of Gradle is to have a rule-based system that does not exclude full scripting. I don't know of any other build system that bridges these extremes as well as Gradle does.


I often convert makefiles to shell script (bash) when I outgrow my knowledge of make. However, it's where I always start, and the makefile never goes away, just the complex pieces get rewritten in bash!


I am glad that make exists. Imagine what would happen if every open source tool used its own build environment: open source software would be much more difficult to build from source (which I sometimes can't do without).

So it is rather straightforward: type make, sometimes run autoconf first, and most of the time you are done.

Also in my own projects I prefer make to many other build tools. In software development, command line still rules (if you want to be really productive)!


I've used make as the basis for a system that works out the dependencies for nightly batch processing in an ERP system and coordinates the execution of the required processes across multiple applications that make up the system.

There's some shell script to glue the pieces together, and a Python script for invoking processes within the ERP applications, but make is the secret sauce that figures out what order to run everything in.
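A hedged sketch of that pattern (the job names and scripts here are invented): each batch job touches a stamp file on success, the dependency graph encodes the required order, and `make -j` can run independent jobs concurrently.

```makefile
# Nightly batch orchestration via stamp files (all names hypothetical)
stamps/extract.done:
	./run_extract.sh && mkdir -p stamps && touch $@

stamps/post_gl.done: stamps/extract.done
	python post_gl.py && touch $@

stamps/report.done: stamps/post_gl.done
	./nightly_report.sh && touch $@

.PHONY: nightly
nightly: stamps/report.done
```

If a job fails mid-run, rerunning `make nightly` picks up where it left off, since completed jobs already have their stamps.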


https://github.com/ndmitchell/shake Shake is an awesome alternative to Make written in Haskell. It fixes a lot of the weird dependency tracking issues Make has ("my build broke and I don't know why!" "did you make clean?")


Surprised that there isn't a single comment here about other options like Ant and Gradle. I personally hate anything where a tab has a different meaning from four spaces, but maybe it's just me. Anyway, my personal choice for any kind of automation would not involve make if I can use the tools I mentioned.


I've been using ansible for complex build environment setup. Inline documentation. Can Factor in software dependencies or environmental setup as required.

It's working pretty well so far. It wasn't an intentional thing; I had a fairly complex stack for a particular project that I needed to share.


I was looking at Ansible in the context of this thread:

https://news.ycombinator.com/item?id=7487202

exactly because I like the idea of capturing the setup (compared to a more haphazard approach). My initial reaction was that for the computer setup use case, it did too much to hide away complexity.

I guess I don't have a deeper point, but I wonder if it is overkill for capturing simple workflow like is discussed here.


You might be interested in Drake, a kind of ‘make for data’.

http://blog.factual.com/introducing-drake-a-kind-of-make-for...


I used to do this until I found drake. Drake is the truth and the light. Use it.


[deleted]


Make has hardly any language-specific features or default rules. Its only purpose is to automate stuff you can already do manually at the command line. If you've got a script that obfuscates your source code, then you can call that from make.
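For instance, a sketch of a pattern rule invoking a hypothetical obfuscation script (the script name and directory layout are assumptions):

```makefile
# Run an (assumed) obfuscator over every .js source; only changed files rebuild
SRC := $(wildcard src/*.js)
OUT := $(SRC:src/%.js=build/%.js)

build/%.js: src/%.js
	mkdir -p build && ./obfuscate.sh < $< > $@

.PHONY: all
all: $(OUT)
```

The `$<` and `$@` automatic variables stand for the prerequisite and target of whichever file pair the rule is currently building.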


Yes, it deserves a downvote, because it's incredibly lazy. If you read the article, or glanced at the Wikipedia page, you'd know "make" predates JavaScript by decades.


It most definitely does not.


Make uses disk I/O, which is really slow, and for complex projects it becomes difficult to maintain. Also, why not use a proper programming language to define the build process? Well, a modern tool like Gulp[1] does that. It's streaming/asynchronous and scripts are written in JavaScript. I don't see why one would prefer Make over it.

1. https://github.com/gulpjs/gulp


There are many complex projects that use Make to build source code. Grab just about any software release tarball and there will almost certainly be a Makefile in it.

This is yet another example of the JavaScript community being completely ignorant of what came before them.


I'm not defending the parent topic particularly, but it's worth noting that you're not entirely correct.

If you download any complex project, there will almost certainly be a meta makefile generator (autoconf, cmake, etc.) or a set of about 30 makefiles for various specific situations (lua, etc).

...because make is terrible at doing complex tasks on its own, and terrible at code reuse on its own.

To be fair; async I/O isn't a magical button that makes gulp better than anything else, but it's a bit off saying an entire community is ignorant. Don't be a dick.

The important points about gulp are:

    - Better syntax
    - Plugin system (code reuse)
    - Functional programming style
    - Faster than its main alternative, grunt
Shrug, I wouldn't use it to build C code. ...but then again, I'd never use make to build a C project either. Ninja is significantly faster for the same low-level build process.


I wish I could use make to build my JavaScript code, but I need it to also work on windows (easily).

If there was a version of make I could point to that would work easily on Windows, preferably as a non-global variable, I would use that; until then I will probably have to use something else, though likely just bash scripting with pipes (which does work on Windows).


What do you mean when you say "as a non global variable"?

The one here:

http://gnuwin32.sourceforge.net/packages/make.htm

works fine if you do set path= and then call it using a full path into Program Files (I've just checked to make sure, but I used whatever version I had installed, not sure it's the latest one there).


I never claimed Make is not often used. My point was that there are better tools than Make available these days. If someone is starting a new project, it makes sense to check if there are alternatives to Makefiles.


And how exactly do you propose to compile a file without reading it?


Come on, man... really? It's right there: JavaScript, streaming, asynchronous. I'm not sure how many more buzzwords you need. Those earlier losers who built make, gcc, and other tools just didn't know what they were doing. And with newer tech, everyone who's not streaming asynchronous JavaScript is a dinosaur luddite.


I didn't propose that. The point is to read the file once, then do multiple steps on it, and write it out just once.


I feel like this whole thread is a bad joke.

Do you know of Unix pipes? The fact that gulp calls all of this "streams" and "pipes" is a seriously bizarre form of NIH.

You might also be interested in make -j option.


Pipes and streams mean something slightly more specific in a Node context, but they are actually compatible with Unix streams, which shockingly enough also work well on Windows, meaning you can string together a make-like build system that works on Windows as well (easily; make is not easy on Windows).


Not sure what you mean. I've developed Android apps on Windows 7 using Cygwin and make just fine. It's nearly identical to using it on Linux.


You still have to install Cygwin; other solutions require no .dlls.


No .dlls, just rewriting everything in JavaScript. Because that's so much less invasive than a DLL that lets you use existing mature solutions.


If only the operating system had some facility to store a recently read file in RAM, so it didn't have to read it from the disk the next time. That would be a real innovation!

(sarcasm)


Page caching: it's been around for a while, at least on Unix systems. I imagine Windows etc. have it too.


Yeah, it's called piping, and it's supported natively in make.


I stand corrected :).


I wish my compilations were merely I/O bound... everything would be so much faster then!


Well, actually they may be I/O bound.

http://aegis.sourceforge.net/auug97.pdf is old, but the problem it documents is still fairly common.


Ah, recursive make. I once spent some time trying to convert a large multiple-directory product build from many makefiles into one. Now that was painful. So many concurrency problems...


I might be wrong, but I think make does not work the way you think it does. First of all, files you write to disk are not actually written to the hard drive immediately; there are several caches in between. Secondly, if make has to process a file in multiple steps, it will probably use Unix pipes and won't write out intermediate files to disk.
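To illustrate the second point: a make recipe is just shell, so several processing steps over one input can be chained through a pipe, writing only the final output (the filenames here are placeholders):

```makefile
# One read, several transformations, one write -- no intermediate files on disk
report.txt: access.log
	grep -v '^#' $< | sort | uniq -c > $@
```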



