I generally don’t pay attention to the releases of programming languages unless they’re notable for some reason or another, and I think this one qualifies. Rust is celebrating its tenth anniversary with a brand new release, Rust 1.87.0. This release adds anonymous pipes to the standard library, lets inline assembly jump to labeled blocks in Rust code, and removes support for the i586 Windows target. Considering Windows 7 was the last Windows version to support i586, I’d say this is fair.
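If you’re curious what the anonymous pipes addition looks like, here’s a minimal sketch using the newly stabilised std::io::pipe API as I understand it; in real use you’d typically hand one end to a child process rather than read back your own writes:

```rust
use std::io::{pipe, Read, Write};

fn main() -> std::io::Result<()> {
    // pipe() returns a connected (PipeReader, PipeWriter) pair.
    let (mut reader, mut writer) = pipe()?;

    writer.write_all(b"hello from 1.87")?;
    drop(writer); // close the write end so the read below sees EOF

    let mut buf = String::new();
    reader.read_to_string(&mut buf)?;
    assert_eq!(buf, "hello from 1.87");
    Ok(())
}
```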
You can update to the new version using the rustup command, or wait until your operating system adds it to its repository if you’re using a modern operating system.
20 years and still not producing a stable target API… yeah, this is never going to happen.
What about GNU Hurd in Rust?
What’s the point of Rust if it cannot even interface with itself? That makes it problematic when used in large systems. If I have to go through translation layers and interface Rust to Rust through outdated C garbage, then I might as well use C++ and not bother with writing wrappers. No, I don’t want “repr C”. If I use a memory safe language, then it should propagate memory safety across modules like Java, .NET and others do.
“You can update to the new version using the rustup command, or wait until your operating system adds it to its repository if you’re using a modern operating system.”
Based on every other modern language I’ve used (I’ve done less Rust than I would like), you may be better off setting up a Rust development environment using whatever the recommended modern tooling is. Trying to use an OS-supplied package often does not go well. After years of experience, I’ve found that producing software is just different than consuming it.
If you want a “memory safe language that propagates memory safety across modules”, then I have news for you: C#, Java and other such languages are not scheduled for disappearance from the face of the Earth!
Rust serves a different niche and that’s OK. Not every language should be used for every task imaginable; we lost that dream decades ago.
C# and Java are great… for high-level development. You’re not going to see them in the Linux kernel. Rust gives similar protections for low-level development.
benjaminoakes,
That’s just it. I like C#, it has a lot of nice features, but its reliance on managed memory is a big con for low-level development.
If someone took C# and gave it compile-time safety verification without relying on run-time managed memory, I think it would be a winner. Rust could theoretically get there and it clearly has merit, but a lot of devs consider Rust’s syntax to be a con. It resembles more academic languages, which makes sense. Although syntax is arbitrary, given the importance of migrating away from unsafe code to safe code, it may have been an unfortunate decision that creates unnecessary hurdles for adoption.
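To make that concrete, here’s the kind of compile-time verification Rust already does today, with no GC and no runtime check; this minimal sketch is simply rejected by the compiler:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];   // immutable borrow of v starts here
    v.push(4);           // compile error: cannot borrow `v` as mutable
                         // while the immutable borrow is still live
    println!("{first}"); // the borrow is used here, so it is live above
}
```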
If you are more adventurous, I would recommend that you check out F# on .NET. It is an official .NET language and provides the kind of safety that is not available in C#. It allows you to easily consume .NET assemblies and interoperate with other languages in the ecosystem.
This was even a problem for C++. Name mangling (MS: name decoration) worked differently depending on the compiler you used. This happened on Linux between binutils and glibc versions (v5 vs. v6). As a result, you would always depend on the library vendor to update their distribution for newer versions of the compiler and runtime. On Windows especially, static and dynamic libraries compiled by a previous compiler version would be incompatible with a future version. We did not even have Dependency Walker (analogous to ldd) until much later to ease the pain of determining which name mangling was used and which CRT dependencies were required. The situation was worse when you had to interoperate between different compiler vendors (GCC, MSVC, BCC, MWCC, etc.).
I believe Rust will eventually get there, but it is going to take a lot more effort due to its heavy dependence on compile-time checks. For example, how is the borrow checker going to ensure that mutable references passed into a compiled library are used correctly, when no static code analysis can be done? Problems like this will take time to solve.
However, at the moment, you can use dynamic libraries that have been compiled as part of a Rust project. It is not perfect, but it allows you to share libraries between binaries without having to statically link them into each executable.
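As a minimal sketch of that workaround (the crate layout and names here are hypothetical), the usual approach is to pin the exported surface to the C ABI, since the Rust ABI itself is not stable between compiler versions:

```rust
// In the library crate's Cargo.toml:
//   [lib]
//   crate-type = ["cdylib"]  // or "dylib" for a Rust-ABI dynamic library
//                            // (then both sides must use the same rustc)

/// Exported with an unmangled, C-ABI symbol, so this function can be
/// loaded from the resulting .so/.dll by any Rust compiler version (or
/// any other language) without depending on Rust's unstable mangling.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```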
The problem with Rust is not its reliance on the borrow checker (Rust already reasons about functions as black boxes; shared libraries wouldn’t change anything there), but its reliance on generics. They are literally everywhere, and with things like Any… suppose you have Result<Foo, Bar> where Foo and Bar come from different libraries… who should create such a type? To pass that type anywhere as dyn Any? Google “How Swift Achieved Dynamic Linking Where Rust Couldn’t” and see what it takes to support something like this. This is a gigantic project, comparable in size and scope to everything else that was done to Rust… and the only one who may fund such development is someone who wants to expose a Rust ABI as their official OS ABI… maybe Google or Microsoft, definitely not Apple… not gonna happen any time soon.
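To make the Result<Foo, Bar> point concrete, here’s a toy sketch (Foo and Bar are hypothetical stand-ins for types exported by two unrelated crates):

```rust
use std::any::Any;

struct Foo; // stand-in for a type exported by one library
struct Bar; // stand-in for a type exported by another

fn main() {
    // Result<Foo, Bar> is a brand-new concrete type that neither library
    // ever compiled. With static linking, the downstream compiler simply
    // instantiates it; a prebuilt dynamic library contains no copy of it.
    let value: Result<Foo, Bar> = Ok(Foo);

    // Erasing it to dyn Any only hides the question: somebody still had
    // to produce the concrete Result<Foo, Bar> before it could be boxed.
    let erased: Box<dyn Any> = Box::new(value);
    assert!(erased.downcast_ref::<Result<Foo, Bar>>().is_some());
}
```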
zde,
I agree, borrow checking isn’t really a problem. Ideally I’d like to see borrow checking work across languages. Generics are a much bigger problem for statically compiled libraries. They would not be a problem for intermediate code like Java’s or other languages that can be partially compiled and only finalized on the target. Alas, Rust aims to work like C, which makes it harder to support generics. Even C++ is problematic if you use templates. In practice, C++ libraries end up “cheating” by having code in the .h file instead of the .cc file, so that the “library” is comprised of code residing in both the shared object and the include file, although I find this very inelegant.
I really think that intermediate code is the solution and the linker just needs to be more sophisticated. But…this is a really hard sell for something that’s meant to replace C.
Good points on the linker and IL. It is possible to deliver libraries as IL and define an interoperability format (like what MS does with ‘netstandard’). Then, when you want to deliver native binaries, convert the IL to machine code. However, this would require significant changes to the language and object metadata. This may not sit well with purists, though, as it would shift the checks to the runtime.
adkilla,
I agree there would be a lot of resistance; heck, there already is. I don’t see a reason the borrow logic should care about the type, as long as the reference is borrowed and returned correctly.
Having an IL brings another cool benefit over statically built libraries: the ability to compile with CPU-specific features/optimizations. With standard software (be it Windows or Linux), the shared libraries and executables we get have generic optimizations, with instructions that target older CPUs for compatibility’s sake. If you had an IL, the code could be optimized for the current CPU, not only using all the latest instructions/SSE levels but even optimizing things like cache lines. This is something compilers have long been able to do, but we generally haven’t benefited from it because of the way we distribute one-size-fits-all static binaries.
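For comparison, here’s roughly how ahead-of-time-compiled Rust approximates that today with runtime feature detection; the function names are mine, and an IL with a JIT could of course do this transparently:

```rust
// A sketch, not a recipe: dispatch to a specialized code path based on
// the features of the CPU actually present at run time.
fn sum(xs: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // Safe to call only because we just verified AVX2 is present.
            return unsafe { sum_avx2(xs) };
        }
    }
    xs.iter().sum() // portable fallback for older CPUs
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(xs: &[f32]) -> f32 {
    // With AVX2 enabled, the compiler is free to vectorize this body.
    xs.iter().sum()
}

fn main() {
    println!("{}", sum(&[1.0, 2.0, 3.0]));
}
```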
IL is absolutely not needed for generics. It may be used for inlining things at runtime, but if you accept the fact that passing objects over an ABI boundary may impose penalties (like Swift did… a thing that has nothing to do with Objective-C, because Objective-C never did generics before Swift added them), then you can do something that Ada and Extended Pascal did decades ago: compile a generic function into one, single, polymorphic function… that receives information about types as a hidden argument (see the sketch below).
It would also make Rust less complex to use because you would also get generic closures, etc.
But the devil is in the details: someone needs to decide who would produce these hidden descriptors and how (or maybe it’s better to make them explicit?), add machinery to work with generic types in a polymorphic function, etc.
No one is working on that right now, and that’s why it won’t get done.
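A toy sketch of that hidden-descriptor idea (all names here are mine; real implementations like Swift’s witness tables are vastly more involved):

```rust
use std::cmp::Ordering;

/// The hidden descriptor: everything the shared, polymorphic function
/// needs to know about a type at run time.
struct TypeDescriptor {
    size: usize,
    compare: fn(*const u8, *const u8) -> Ordering,
}

/// One single compiled function that works for any element type,
/// instead of one monomorphized copy per type.
unsafe fn max_index(items: *const u8, len: usize, desc: &TypeDescriptor) -> usize {
    let mut best = 0;
    for i in 1..len {
        let candidate = items.add(i * desc.size);
        let current = items.add(best * desc.size);
        if (desc.compare)(candidate, current) == Ordering::Greater {
            best = i;
        }
    }
    best
}

fn main() {
    let xs = [3i32, 9, 1];
    // In the scheme zde describes, the compiler would synthesize this
    // descriptor and pass it implicitly; here we build it by hand.
    let desc = TypeDescriptor {
        size: std::mem::size_of::<i32>(),
        compare: |a, b| unsafe { (*(a as *const i32)).cmp(&*(b as *const i32)) },
    };
    let idx = unsafe { max_index(xs.as_ptr() as *const u8, xs.len(), &desc) };
    assert_eq!(idx, 1); // xs[1] == 9 is the largest
}
```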
zde,
It depends on the implementation. If you want to support them with runtime libraries/logic, then the compiler can emit code that ignores the types, at the cost of having to resolve it all at run time. This certainly makes things easier for the compiler; however, it comes at the cost of the compile-time optimization where the types can be baked into the binary. Precompiling the type information at compile time eliminates all the conditionals and jumps that would be needed at run time (see the sketch below).
https://learn.microsoft.com/en-us/cpp/extensions/generics-and-templates-visual-cpp
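Rust itself exposes both strategies side by side, which makes this tradeoff easy to see in a small, self-contained sketch:

```rust
use std::fmt::Display;

// Monomorphized: the compiler emits a specialized copy per concrete
// type, so calls are direct and the types are baked into the binary.
fn show_static<T: Display>(x: T) {
    println!("{x}");
}

// Type-erased: one compiled function for all types, dispatching through
// a vtable at run time; the indirection you accept to keep a single copy.
fn show_dynamic(x: &dyn Display) {
    println!("{x}");
}

fn main() {
    show_static(42);        // instantiates show_static::<i32>
    show_static("hello");   // instantiates show_static::<&str>
    show_dynamic(&42);      // the same machine code serves every type
    show_dynamic(&"hello");
}
```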
To get to the point: if we want to replace C++ with something else, and that something else has to do at runtime what C++ can do at compile time, then I would say that’s at least one objective con. Yeah, there are tradeoffs, but having an IL solves this problem nicely. Obviously I understand people will resist IL as having its own cons. There are pros and cons to everything.
For application software, I think having an IL is really nice, because it not only means we can have target-specific optimizations but also makes the software portable across architectures.
But an IL would not be tolerated inside of a kernel. This makes me wonder about the merits of a language that could hypothetically support both modes of operation. I know there are going to be skeptics, but I don’t see a reason it could not be done in principle.
Once again: have you actually read the “How Swift Achieved Dynamic Linking Where Rust Couldn’t” article?
It explains what is needed and what’s not. We know that because Swift did all the work; it does what people want.
Yes, generics with type information specified at runtime would be slower than generics instantiated at compile time, but that’s not a problem compared to C++, because C++ can’t handle generics that cross a dynamic library boundary AT ALL.
And you can instantiate generics with known types and leave generics with unknown types till runtime… Swift does that, so why couldn’t Rust?
An IL is absolutely not needed and not wanted in that scheme (Apple tried it, but ditched and abandoned it in the end… the complexity of the whole thing is just not worth it).
zde,
It’s a question of what tradeoffs you want to make to get there… I will get to that, but if we’re allowing for an open interpretation of ‘generics’ and ‘templates’, then C++ compilers actually DO let you use templates across the DLL boundary, but the type information has to be known to the compiler.
https://learn.microsoft.com/en-us/cpp/cpp/explicit-instantiation
This is the problem with the C++ approach: the linker only works with statically compiled code and doesn’t know how to generate new code. This simplifies linking but places some hard limits on what languages can do.
I never claimed that IL was needed, though. My claim is that if you’re using IL, the problem goes away. OK, so getting to your article…
https://faultlore.com/blah/swift-abi/
The author is not entirely right, or at the very least is using sloppy wording. As I pointed out above, C++ developers CAN do it, with the caveat that types have to be known at compile time or else left in the header file. I very much appreciate the significance of these cons, but saying “you simply can’t use it” is technically underselling C++. Later on it is mentioned that “a header could just include the template’s implementation”, so the author is aware that C++ devs do have various paths to make things work, but I think we all understand that it’s messy. If you squint hard enough, having code in header files is almost like a poor man’s “IL”. Anyway, I’ll move on, since I think the value of the article lies in explaining Swift rather than C++.
I think the main takeaway is Swift’s use of vtables and indirection for class structures, so that each side of the DLL boundary has a map of objects. It’s easy to see why this works, but at the same time polymorphism/vtables add runtime costs, which is a con compared to C++. C++ uses vtables to achieve polymorphic function behavior, but they add more indirection when used, so some C++ devs avoid the feature. Swift relies on it implicitly for class variables, which I did not know before reading this. In any case, it’s easy to see there are tradeoffs. The author specifically makes the point I was making about the compiler not being able to bake an optimized binary unless those types are known at compile time or an IL is used.
You can make the argument that polymorphism/vtables are worth the costs. I see that it offers a neat solution that lets us map out differences across DLL boundaries, but that comes at the expense of more indirection than is needed for C++ templates or IL. It’s fun to compare all the different ways the problem can be tackled, and I’m glad we’re having this discussion.
When it comes to Rust, I don’t know that the cons of polymorphism/vtables are ones that Rust’s designers would be willing to build into the language. The goals are similar to those of C/C++: these languages deliberately avoid forcing developers to use features/mechanisms that have a performance cost. While C++ supports polymorphism, developers opt into it and don’t pay the costs for it otherwise. It would be interesting to have a language that could switch all of these features on and off and be everything to everyone, but…it’s a bit of a pie in the sky 🙂
The borrow checker was just an example. However, your point comes back to what I have stated about the over-reliance on static analysis. Swift leverages years of investment in Objective-C and a dedicated team financially funded and controlled by a single large corporation, which is something Rust does not have. It will take Rust much longer to get there. I do see the adoption of Rust growing with more investments coming in (though it is slow). Personally, I’m more hopeful of Rust becoming a mainstream language than something like Haskell. It will never replace C/C++, which is basically high-level assembly. But it does not have to be a C/C++ killer to become a popular systems language.
If you are looking for a C++ contender, then Zig would be a better option. However, I am curious to see where Carbon ends up once momentum picks up. Carbon seems to take the ‘Dijkstra’ approach to language design, rather than the ‘quick wins’ approach taken by many languages.
adkilla,
This will be controversial, but IMHO we will have failed if we do not replace C/C++. The whole idea is to find a replacement for unsafe legacy languages, or at the very least to get the ball rolling in that direction.
I guess language designers could change C itself to make it safe. However, memory safety isn’t the only problem with C; it is antiquated in many other ways. Other languages can offer a better starting point than C, but most fall short because they rely on GC and run-time safety checks. Solving this is Rust’s claim to fame, but maybe more languages will follow suit. Off the top of my head, I know Ada is working on this too now. I don’t think most developers even know about it, though.