How many LOOP macros does the community need, to take one example, particularly when bootstrapping an implementation?
Similarly with, arguably, 70-80% of the runtime. The CL spec is dominated by the large library, which ideally should be mostly portable CL, at least I would think.
LOOP is a great example, because every implementation's LOOP is just MIT LOOP version 829, originally cleaned up by Burke. But nobody can resist adding their personal architectural touch, so while the basic framework of LOOP remains identical across implementations, there's superficial refactoring done by pretty much everyone. If you take SBCL and Franz's Allegro CL as the state of the art in free software and commercial implementations respectively, they have equally solid improvements on the original LOOP that actually produce incompatible behavior in underspecified corners of the spec. The respective developer communities are, of course, very defensive about their own incompatible behavior being the correct behavior. beach's SICL from the sibling comment is the xkcd joke about standards: "20 standards? We need a new standard that unifies them all!" -- "Now we have 21 standards."
LOOP in this case is a very simple example, but consider CLOS, which was originally implemented on top of PCL (Portable CommonLoops), a system with Xerox Interlisp heritage that was massaged into a compliant CLOS over the years. SBCL, for example, uses a Ship of Theseus PCL, while Franz did a from-scratch rewrite. The hypothetical portability of that layer is significantly trickier than LOOP, since CLOS is deeply tied to the type system, and the boundary between some hypothetical base Common Lisp and its CLOS layer becomes complicated during system bootstrapping. But that's not all! Of course CLOS has to be deeply tied to the compiler, the type system, all kinds of things, to provide optimizations. Discovering the appropriate slicing boundary is difficult, to say the least.
I'm unsure how complete it is, but it seems to cover much of the standard.
Savannah is very basic, perhaps too basic, but it's okay for my project.
I abandoned that when I discovered how little control you have. I seem to recall having to wait over a week for someone to enable non-fast-forward pushes. Overly strict and understaffed. I opted for self-hosting.
I kept the project web page there, though.
I think most free CL implementations have a stepper. Which ones do not?
CMU CL, SBCL, and LispWorks have steppers.
Clozure does not. (Edit: an answer on https://stackoverflow.com/questions/37754935/what-are-effici... suggests it does...)
As I understand it, those are the big 4.
Clisp, ABCL, and Allegro also appear to have steppers.
Always cool to see a new implementation, though!
Also, compilers are allowed to make the code unsteppable in some cases, depending on the optimization declarations: generally, debug needs to be >= 2 and greater than speed/compilation-speed/space. In some circumstances you land in decompiled/macroexpanded code, which is also quite unhelpful.
Anyway, it's not that source-level stepping isn't there at all, it's just quirky and somewhat inconvenient. A fresh implementation that does comparatively little optimization and is byte-code based can probably support debuggers better. I hope such support won't go away later when the native code compiler is implemented.
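For reference, invoking the stepper usually looks something like the sketch below; `my-function` is made up, and the exact interaction (step into, next, continue, etc.) is implementation-specific:

    ;; Compile with a high debug setting so the implementation keeps enough
    ;; information for source-level stepping (hypothetical example function).
    (defun my-function (x)
      (declare (optimize (debug 3) (speed 0)))
      (* 2 (+ x 1)))

    ;; CL:STEP is part of the standard; it evaluates the form under the stepper.
    (step (my-function 10))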
If I recall correctly, there are macros to control the level of code optimization? And some implementations can turn it off entirely for interactive use?
Or am I off-base?
Yup, you can either `(proclaim '(optimize (debug 3) (speed 1)))` somewhere, which will take effect globally, or you can `(declare (optimize ...))` inside a particular function. It sounds great in theory - and it is great, in some respects - but this granularity makes it harder to ensure all interesting code is steppable when you need it.
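To illustrate the granularity point, a sketch (the function name is made up):

    ;; Globally, for the whole image -- note that PROCLAIM, being a function,
    ;; takes a quoted declaration specifier:
    (proclaim '(optimize (debug 3) (speed 1)))

    ;; DECLAIM is the macro equivalent at top level:
    (declaim (optimize (debug 3) (speed 1)))

    ;; Locally, inside one function -- a local declaration overrides the
    ;; global policy, so this function may quietly become unsteppable again:
    (defun hypothetical-hot-loop (xs)
      (declare (optimize (speed 3) (debug 1)))
      (reduce #'+ xs))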
Have you thought about writing up your experience?
Btw, I stick to SBCL as I use vim, and so far the script here works for me. Might try this when I'm back to doing Lisp.
In Common Lisp, you don't need a build system at all; you can `(load "file.lisp")` everything and it should generally just work. But of course build systems are useful tools, so ASDF exists nonetheless, and it's nice enough that nobody has built a better, more widespread Common Lisp build system.
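A "build" in that spirit can be just a file of LOAD forms run in dependency order (a sketch; the file names are made up):

    ;; build.lisp -- hypothetical "build system": load files in dependency order
    (load "package.lisp")
    (load "utils.lisp")
    (load "main.lisp")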
Some good trivial examples are in the Lisp Cookbook:
> ASDF (Another System Definition Facility) is a package format and a build tool for Common Lisp libraries. It is analogous to tools such as Make and Ant.
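For a concrete flavor, a minimal system definition might look something like this (a sketch; the system name, files, and dependency are made up):

    ;; my-project.asd -- hypothetical minimal ASDF system definition
    (asdf:defsystem "my-project"
      :description "Example system"
      :depends-on ("alexandria")       ; hypothetical dependency
      :serial t                        ; compile/load components in order
      :components ((:file "package")
                   (:file "main")))

With that file somewhere ASDF can find it, `(asdf:load-system "my-project")` compiles and loads the files in order.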
Contemporary developers using more mainstream languages are likely more familiar with asdf [2], the "Multiple Runtime Version Manager".
[1] https://en.wikipedia.org/wiki/Another_System_Definition_Faci...
> $ autoreconf -i
> (which is not needed in released tarballs) and then run the usual
Why would you do that to yourself and your users in a new project in this day and age?
A release tarball should be nothing more than a git snapshot of a commit.
Drop the GNU AutoCrap if you know what's good for you.
One thing to do instead is to just write a ./configure script which detects what you need. In other words, be compatible at the invocation level. Make sure this is checked into the repo. Anyone checking out any commit runs that, and that's it.
Someone who makes a tarball using git, out of a tagged release commit, should have a "release tarball".
A recent HN submission shows a ./configure system made from scratch using makefiles, which parallelizes the tests. That could be a good starting point for a C on Linux project today.
Not everything is C, or GNU/Linux. The example also misses much of the basic functionality that makes GNU autotools amazing.
The major benefit of GNU autotools is that it works well, especially for new platforms and cross compilation. If all you care about is your own system, a simple Makefile will do just fine. And with GNU autotools you can also choose to use just GNU autoconf, or just GNU automake.
Having generated files in the release tarball is good practice; why should users have to install a bunch of extra tools just to get a PDF of the manual or other non-system-specific files? It is not just build scripts, either: installing TeX Live just to get a PDF manual of something is super annoying.
Writing your own ./configure that works even remotely the way users expect is non-trivial and complicated -- we did that 30 years ago, before GNU autoconf. There is a reason why we stopped doing that ...
I'd go so far to think that GNU autotools is the most sensible build system out there...
Either they use a modern programming language (which typically has an included build system, like Rust's cargo or simply `go build`) or they use simple Makefiles. For C/C++ codebases, it seems like CMake has become the dominant build system.
All of these are typically better than what GNU autoconf offers, with modern features and equal or better flexibility to deal with differences between operating systems, distributions, and optional or alternative libraries.
I don't really see why anyone would pick autoconf for a modern project.
> I don't really see why anyone would pick autoconf for a modern project.
If you build for your system only and never ever plan to cross-compile, by all means go with a static Makefile.
Most of my disdain for Autoconf was formed when I worked at a company where I developed an embedded Linux distro from scratch. I cross-compiled everything. Most of the crap I had to fight with was Autoconf projects. I was having to do things like export various ac_cv_... internal variables that nobody should have to know about, and patch the configure scripts themselves. Fast forward a few years and I see QEMU used everywhere for "cross" builds.
The rest of my disdain comes from having worked with the internals of various GNU programs. To bootstrap their build systems from a repository checkout (not a release tarball) you have to follow their specific instructions. Of course you must have the Autotools installed. But there are multiple versions, and they generate different code. For each program you have to have the right version that it wants. If you have to do a git bisect, older commits may need an older version of the Autotools. You bootstrap the configure system from scratch, and the reward is the privilege of now running configure from scratch. It's simply insane.
You learn tricks like touching certain files in a certain order to prevent a reconfigure that has about a 50% chance of working.
Let's not even get into libtool.
The main idea behind Autoconf is political. Autoconf based programs are deliberately intended to hinder those who are able to build a program on a non-GNU system and then want to make contributions while just staying on that system, not getting a whole GNU environment.
What I want is something different. I want a user to be able to use any platform where the program works to obtain a checkout if exactly what is in git, and be able to make a patch to the configuration stuff, test it and send upstream without installing anything that is not required for just building the program for use.
https://wiki.debian.org/CrossCompiling https://crossqa.debian.net/
Yeah well this is not quite true. Most embedded distros leverage autotools heavily. In Yocto you just specify autotools as the package class for the recipe and in most cases it will pull, cross compile and package the piece of software for you with no intervention.
The tools are clearly antiquated and written in questionable taste, and 80% of the cases they solve are no longer relevant. They are still very useful for the rest.
Cross compilation for distributions is a mess, but that is because of a wide proliferation of build systems, not because of the GNU autotools -- which probably have the most sane way of doing cross compilation out there. E.g., distributions have to figure out why ./configure doesn't support --host because someone decided to write their own thing ...
> The main idea behind Autoconf is political. Autoconf based programs are deliberately intended to hinder those who are able to build a program on a non-GNU system and then want to make contributions while just staying on that system, not getting a whole GNU environment.
Nothing could be further from the truth: GNU autoconf started as a bunch of shared ./configure scripts so that programs COULD build on non-GNU systems. It is also why GNU autoconf and GNU automake go to such lengths to support cross compilation, to the point where you can build a compiler that targets one system, runs on another, and was built on a third (a Canadian cross compile).
If you need to compile programs that run on the build machine, you should have a ./configure script which allows a host CC and target CC to be specified, and use them accordingly. Even if you deviate a bit from what others are doing, if it is clearly documented and working, the downstream package maintainer can handle it.
I mentioned here recently that I released a personal project under the GPLv3. The very first issue someone filed on GitHub was to ask me to relicense it as something more business-friendly. I don't think I've ever been so offended by an issue before. If I'm writing something for fun, I could not possibly be less interested in helping someone else monetize my work. They can play by Free Software rules, or they can write their own version for themselves and license it however they want. I don't owe them the freedom to make it un-Free.
The fact that this is hosted on a FSF-managed service indicates the author likely sees it similarly.
And yet, this is a labor of love by a single person, hosted on the FSF's servers. I don't know them, and this is pure conjecture, but I suspect they probably couldn't care less if that made things challenging for commercial users. There are plenty of other Lisps for those users to choose from.
https://common-lisp.net/implementations
I think a full-featured GPLv3 implementation would be very cool, personally.