I’m writing a simple functional language with automatic memory management. Go’s simplicity makes it seem like a good transpilation target: garbage collection, a decent concurrency paradigm, generally simple/flexible, errors as values. I already know Go quite well, but I have no idea about IR formats (LLVM, etc.).
To be clear, using Go as a compiler backend would be a hidden implementation detail, and there would be no user-level interop features. I’d like to bundle the Go compiler with my own compiler to save end users headaches, but I’m not sure how feasible that is. Once my language is stable enough for self-hosting, I’d roll my own backend (likely using Cranelift).
Pros
- Can focus on my language, and defer learning about compiler backends
- In particular, I wouldn’t have to figure out automatic memory management
- Could easily wrap Go’s decent standard library, saving me a lot of implementation grunt work (see the sketch after this list)
- Would likely borrow a lot of the concurrency paradigm for my own language
- Go’s compiler is pretty speedy
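To make the “wrap the standard library” point concrete, here’s a rough sketch of what generated output could look like. The `mylang_readFile` name and the errors-as-values convention are purely hypothetical placeholders, just one way a builtin might delegate to Go’s `os` package:

```go
// Hypothetical generated code: a readFile builtin in the source language
// compiles down to a thin wrapper over Go's standard library.
package runtime

import "os"

// mylang_readFile returns the file contents plus an error value, matching
// an errors-as-values convention in the source language.
func mylang_readFile(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return string(data), nil
}
```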
Cons
- Seems like an unconventional approach
- Perception issues (thinking of Elm and its kernel code controversy)
- Reduced runtime performance tunability (not too concerned about this, TBH)
- Runtime panics would leak the Go backend
- Potential headaches from bundling the Go compiler (both technical and legal)
- No idea how tricky it would be to re-implement the concurrency stuff in my own backend
So, am I crazy for considering Go as a compiler backend while I get my language off the ground?
Go as a backend language isn’t super unusual; there’s at least one other project (https://borgo-lang.github.io) that has chosen it. And there are many languages that compile to JavaScript or C; Go sits somewhere in between, faster than JavaScript but with memory management, unlike C.
I don’t think panics revealing the Go backend are much of an issue, because true “panics” that aren’t handled by the language itself are always bad. If you compile to LLVM, you have to implement your own debug symbols to get nice-looking stack traces and line-by-line debugging like C and Rust; otherwise debugging is impossible and crashes show you raw assembly. Even in Java or JavaScript, core dumps are hard to debug, ugly, and leak internal details. The reason these languages have nice exceptions is that they implement exceptions and detect errors on their own before they become “panics”, so when a program crashes in Java (e.g. dereferences null) it doesn’t crash the JVM. Go’s backtrace will probably be much nicer than the default from C or LLVM, and you may be able to implement a system like Java’s, which catches most errors and gives your own stack trace before they escalate.
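For what it’s worth, Go’s `defer`/`recover` makes that “catch it before it becomes a visible panic” approach fairly easy. A minimal sketch, assuming every generated program wraps the transpiled user code like this (names such as `runUserProgram` and the `mylang` prefix are just placeholders, not anything a real compiler emits):

```go
package main

import (
	"fmt"
	"os"
	"runtime/debug"
)

func runUserProgram() {
	// ... transpiled user code would go here ...
	var xs []int
	_ = xs[3] // e.g. an out-of-bounds access becomes a Go panic at runtime
}

func main() {
	defer func() {
		if r := recover(); r != nil {
			// Re-map the Go panic into the source language's own error report.
			// A real implementation could translate debug.Stack() back to
			// source positions using a generated line-number table.
			fmt.Fprintf(os.Stderr, "mylang runtime error: %v\n", r)
			_ = debug.Stack() // the raw Go trace could be kept for a --debug mode
			os.Exit(1)
		}
	}()
	runUserProgram()
}
```

With something like this, the user sees an error in your language’s own voice, and the Go-flavoured trace only shows up if you choose to expose it.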
Elm’s kernel controversy is also something completely different. The problem with Elm is that the language maintainers explicitly prevented people from writing FFI to/from JavaScript except in the maintainers’ own packages, after allowing it for a while, so many old packages broke and were unfixable. And there were more issues: the language itself was very limited (meaning JS FFI was essential) and the maintainers’ responses were concerning (see “Why I’m leaving Elm”). Even Rust has features that are only accessible to the standard library and compiler (“nightly”), but there’s a mechanism to let you use them if you really want, and none of them are as essential as Elm-to-JS FFI was, so most people don’t care. Basically, as long as you don’t become very popular and then make a massively inconvenient, backwards-incompatible change for purely design reasons, you won’t have this issue. The rule isn’t “you have to implement Go FFI”, or even “if you do implement Go FFI, don’t restrict it to your own code”; it’s “don’t do what Elm did: offer Go FFI everywhere, become very popular, and then suddenly restrict it to your own code with no decent alternative”.