I had to deal with a lot of FFI to enable a Java Constraint Solver (Timefold) to call functions defined in CPython. In my experience, most of the performance problems from FFI come from using proxies to communicate between the host and foreign language.
A direct FFI call using JNI or the new foreign interface is fast, and has roughly the same speed as calling a Java method directly. Alas, the CPython and Java garbage collectors do not play nice, and require black magic in order to keep them in sync.
On the other hand, using proxies (such as in JPype or GraalPy) causes a significant performance overhead, since the parameters and return values need to be converted, and may trigger additional FFI calls (in the other direction). The fun thing is that if you pass a CPython object to Java, Java gets a proxy to the CPython object. And if you pass that proxy back to CPython, a proxy to that proxy is created instead of unwrapping it. The result: JPype proxies are 1402% slower than calling CPython directly using FFI, and GraalPy proxies are 453% slower than calling CPython directly using FFI.
What I ultimately ended up doing was translating CPython bytecode into Java bytecode and generating Java data structures corresponding to the CPython classes used. As a result, I got a 100x speedup compared to using proxies. (Side note: if you are thinking about translating/reading CPython bytecode, don't; it is highly unstable, poorly documented, and its VM has several quirks that make it hard to map directly to other bytecodes.)
Speaking from zero experience, the FFI stories of both Python and Java to C seem much better. Wouldn't connecting them via a little C bridge be a general solution?
cchianel 58 minutes ago [-]
JNI/the new Foreign FFI communicate with CPython via CPython's C API. The primary issue is getting the garbage collectors to work with each other. The Java solver works by repeatedly calling user defined functions when calculating the score. As a result:
- The Java side needs to store opaque Python pointers which may have no references on the CPython side.
- The CPython side needs to store generated proxies for some Java objects (the results of constraint collectors, which are basically aggregations of a solution's data).
Solving runs a long time, typically at least an hour (although you can configure how long it runs). If we don't free memory (by releasing the opaque Python pointer return values), we will run out of memory after a couple of minutes. The only way to free memory on the Java side is to close the arena holding the opaque Python pointer. However, when that arena is closed, its memory is zeroed out to prevent use-after-free. As a result, if CPython hasn't garbage collected that pointer yet, the next CPython garbage collection cycle will cause a segmentation fault.
JPype (a CPython -> Java bridge) does dark magic to link the JVM's and CPython's garbage collectors, but has performance issues when calling a CPython function inside a Java function, since its proxies have to do a lot of work. Even GraalPy, where Python is run inside a JVM, has performance issues when Python calls Java code which calls Python code.
high_na_euv 1 hour ago [-]
How would IPC methods fit such cases?
Like, talk over some queue, file, http, etc
cchianel 43 minutes ago [-]
IPC was actually used when constructing the foreign API prototype: if you do not use JPype, the JVM must be launched in its own process. IPC was used at the API level, with the JVM starting its own CPython interpreter, and CPython and Java using `cloudpickle` to send each other functions/objects.
Using IPC for all internal calls would probably add significant overhead; the user functions are typically small (think `lambda shift: shift.date in employee.unavailable_dates` or `lambda lesson: lesson.teacher`). Depending on how many constraints you have and how complicated your domain model is, there could potentially be hundreds of context switches for a single score calculation. It might be worth prototyping though.
chris12321 15 hours ago [-]
Between Rails At Scale and byroot's blogs, it's currently a fantastic time to be interested in in-depth discussions around Ruby internals and performance! And with all the recent improvements in Ruby and Rails, it's a great time to be a Rubyist in general!
jupp0r 14 hours ago [-]
Is it? To me it seems like Ruby is declining [1]. It's still popular for a specific niche of applications, but to me it seems like it's well past its days of glory. Recent improvements are nice, but is a JIT really that exciting technologically in 2025?
Ruby will probably never again be the most popular language in the world, and it doesn't need to be for the people who enjoy it to be excited about the recent improvements in performance, documentation, tooling, ecosystem, and community.
faizshah 11 hours ago [-]
I think ruby can get popular again with the sort of contrarian things Rails is doing like helping developers exit Cloud.
There isn’t really a much more productive web dev setup than Rails + your favorite LLM tool. It will take time to win Gen Z back to Rails, though, and away from Python/TS or Go/Rust.
jimmaswell 9 hours ago [-]
My impression is that a Rails app is an unmaintainable dynamically-typed ball of mud that might give you the fast upfront development to get to market or get funded, but will quickly fall apart at scale, e.g. the Twitter fail whale. And Ruby is so full of "magic" that it quickly becomes hard to tell what's going on, or easy to accidentally make something grossly inefficient if you don't understand the magic, which defeats the point of the convenience. Is this perception outdated, and if so, what changed?
m00x 9 hours ago [-]
Rails can become a ball of mud as much as any other framework can.
It's not the fastest language, but it's faster than a lot of dynamic languages. Other than the lack of native types, you can manage pretty large rails apps easily. Chime, Stripe, and Shopify all use RoR and they all have very complex, high-scale financial systems.
The strength of your tool is limited to the person who uses the tool.
amomchilov 9 hours ago [-]
The unrefactorable ball of mud problem is real, which is why both Stripe and Shopify have highly statically typed code bases (via Sorbet).
Btw Stripe uses Ruby, but not Rails.
byroot 7 hours ago [-]
I'd say sorbet largely adds to the mud, but to each their own.
fredrikholm 7 hours ago [-]
> It's not the fastest language, but it's faster than a lot of dynamic languages.
Such as?
IME Ruby consistently falls behind, often way behind, nearly all popular languages in "benchmark battles".
Lio 6 hours ago [-]
Python? Ruby with YJIT, JRuby or TruffleRuby usually beats Python code in benchmarks.
I haven’t seen a direct comparison, but I wouldn’t be surprised if TruffleRuby was already faster than Elixir, Erlang or PHP for single-threaded CPU-bound tasks too.
Of course that’s still way behind other languages but it’s still surprisingly good.
relistan 6 hours ago [-]
In my work I’ve seen that TruffleRuby codebases merging Ruby and Java libraries can easily keep pace with Go in terms of requests per second. Of course, the JVM uses more memory to do it. I mostly write Go code these days but Ruby is not necessarily slow. And it’s delightful to code in.
fredrikholm 5 hours ago [-]
> Python? Ruby with YJIT, JRuby or TruffleRuby usually beats Python code in benchmarks.
Isn't that moving the goal post a lot?
We went from 'faster than a lot of others' to 'competing for worst in class'.
I'm not trying to be facetious, I'm curious as I often read "X is really fast" where X is a functional/OOP language that nearly always ends up being some combination of slow and with huge memory overhead. Even then, most Schemes (or Lisps in general) are faster.
Being faster single threaded against runtimes that are built specifically for multithreaded, distributed workloads is also perhaps not a fair comparison, esp. when both runtimes are heavily used to write webservers. And again, Erlang (et al) come out faster even in those benchmarks.
Is TruffleRuby production (eg. Rails) ready? If so, is it that much faster?
I remember that when the infamous "Truffle beats all Ruby implementations" article came out, a lot of Rubyists shot it down; however, that was several years ago now.
Lio 3 hours ago [-]
Moving the goal posts? Perhaps I misunderstand what you are asking.
Python is not the worst-in-class scripting language. For example, Perl and Tcl are both slower than Python.
Originally you just asked, "such as" [which dynamic language ruby is faster than?]
Implying ruby is slower than every other dynamic language, which is not the case.
JRuby is faster than MRI Ruby for some Rails workloads and very much production ready.
TruffleRuby is said to be about 97% compatible with MRI on the rubyspec, but IMHO isn't production ready for Rails yet. It does work well enough for many standalone non-Rails tasks, though, and could potentially be used for running Sidekiq jobs.
The reason to mention the alternative ruby runtimes is to show that there's nothing about the language that means it can't improve in performance (within limits).
Whilst it's true that Ruby is slower than Common Lisp or Scheme, Ruby is still improving and the gap is going to shrink considerably, which is good news for those of us who enjoy using it.
fredrikholm 2 hours ago [-]
Thank you for a great answer; I did not mean any ill will and apologize if that was how it came across.
Perl, Tcl, Smalltalk etc. are basically non-existent where I'm from, so they didn't occur to me.
Perhaps I'm projecting a lot here. I have worked a lot on high performance systems and am often triggered by claims of performance, e.g. 'X is faster than C', when this is 99.9% of the time false by two orders of magnitude. That didn't happen here.
Thank you for taking the time to answer.
Lio 2 hours ago [-]
> I did not mean any ill will and apologize if that was how it came across.
Oh not at all, no I didn't think that. I'm enjoying the conversation.
It's interesting that you mention Smalltalk as I believe that some of the JIT ideas we're seeing in YJIT are borrowed from there.
As for all the "faster than C" talk, it is very specific to Ruby (or JITted) runtimes and overheads in that context.
I think it gets mentioned because it seems so counterintuitive at first. It's not to imply C isn't orders of magnitude faster in general.
Along with the new out of the box features of Rails 8, the work on Ruby infrastructure is making it an exciting technology to work with again (IMHO).
nirvdrum 5 hours ago [-]
If the Twitter fail whale is your concern, then your perception is outdated. Twitter started moving off Ruby in 2009. Both the CRuby VM and Rails have seen extensive development in the decade and a half since.
I never worked at Twitter, but based on the timeline it seems very likely they were running on the old Ruby 1.8.x line, which was a pure AST interpreter. The VM is now a bytecode interpreter that has been optimized over the intervening years. The GC is considerably more robust. There's a very fast JIT compiler included. Many libraries have been optimized and bugs squashed.
If your concern is Rails, please note that it also has seen ongoing development and is more performant, more robust, and I'd say better architected. I'm not even sure it was thread-safe when Twitter was running on it.
You don't have to like Ruby or Rails, but you're really working off old data. I'm sure there's a breaking point in there somewhere, but I very much doubt most apps will hit it before going bust.
faraaz98 8 hours ago [-]
The Twitter fail whale was more a skill issue than a Rails shortcoming. If you read the book Hatching Twitter, you'll quickly see they weren't great at code.
saagarjha 9 hours ago [-]
Was it ever
genewitch 3 hours ago [-]
There was a cycle in 2012 or so. I reckon PHP has more lines of code deployed.
But C?
adamtaylor_13 14 hours ago [-]
Rails is experiencing something of a renaissance in recent years. It’s easily one of the most pleasant programming experiences I’ve had in years.
All my new projects will be Rails. (What about projects that don’t lend themselves to Rails? I don’t take on such projects ;)
cship2 13 hours ago [-]
Hmm, I thought Crystal was supposed to be a faster Ruby? No?
mbb70 12 hours ago [-]
No one uses Ruby because it is fast. They use it because it is an ergonomic language with a battle-tested package for every webserver based activity you can code up.
brigandish 10 hours ago [-]
> No one uses Ruby because it is fast.
Well, because it isn't.
Crystal is an ergonomic language, too, looking a lot like Ruby even beyond a cursory glance. What Ruby has, like any longstanding language, is a large number of packages to help development along, so languages like Crystal have to do a lot of catching up. Looking at the large number of abandoned gems, though, I'm not sure it's that big a difference; the most important ones could be targeted.
I'm not sure that has any relevance when compared with Python or JS or Go though, they seem to have thriving ecosystems too - is Rails really that much better than the alternatives? I wouldn't know but I highly doubt it.
Alifatisk 4 hours ago [-]
> Crystal was supposed to be a faster Ruby
No, it never intended to be a replacement for Ruby. They share similarities in syntax, the same way Elixir's syntax reminds you of Ruby.
If you want a faster Ruby, check out mruby (not always a drop-in replacement though).
obiefernandez 13 hours ago [-]
Stable mature technology trumps glory.
jupp0r 6 hours ago [-]
That’s why the JVM has been using JITs since the 1990s while it’s a renaissance-inspiring marvel for Ruby in 2025.
pjmlp 7 hours ago [-]
Unfortunately it is, because too many folks still reach out to pure interpreted languages for full blown applications, instead of plain OS and application scripting tasks.
cpursley 27 minutes ago [-]
Can anyone still make a case to start a new project in Rails in 2025 when there is Elixir LiveView?
I enjoy Ruby, but ActiveRecord is a mess, and the language is slow and lacks real-time functionality.
haberman 14 hours ago [-]
> Rather than calling out to a 3rd party library, could we just JIT the code required to call the external function?
I think LuaJIT's FFI is very fast for this reason.
internetter 15 hours ago [-]
"write as much Ruby as possible, especially because YJIT can optimize Ruby code but not C code"
I feel like I'm not getting something. Isn't ruby a pretty slow language? If I was dipping into native I'd want to do as much in native as possible.
hinkley 15 hours ago [-]
There was a little drama that played out as Java was getting a proper JIT.
In one major release, there was a bunch of Java code responsible for handling some UI element activities. It was found to be a bottleneck, and rewritten in C code for the next major release.
Then the JIT became properly useful, and the FFI overhead was more than the difference between the hand-tuned C code and what the JIT would spit out on its own. So in the next major release, they rolled back to the all-Java implementation.
Java had a reasonably fast FFI for that generation of programming languages, but they swapped it for a better one a few releases after that. By then I wasn't doing a lot of Java UI code, so I had stopped paying attention. But around the same time they were also making a cleaner interface between the platform-specific and the general Java code for UI, so I'm not entirely sure how that played out.
But that's exactly the sort of see-sawing you need to at least keep an eye out for when doing this sort of work. Would you be better off waiting a couple milestones and saving yourself a bunch of hand-tuning work, or do you need it right now for political or technical reasons?
pjmlp 15 hours ago [-]
That is where a JIT enters the picture; ideally a JIT can re-optimize to an ideal state.
While this is suboptimal for one-shot execution, when an application is long lived, as with most desktop or server workloads, this work pays off over the life of the application.
For example, Dalvik had a pretty lame JIT, so it was faster to call into C for math functions; eventually, with ART, this was no longer needed, as the JIT could outperform the cost of calling into C.
Depending on the math you need (this is a hedge), FORTRAN is probably faster still. Every time I put together a test and compare Python, Fortran, and C, Fortran wins by a margin: Fortran:C:Python comes out roughly 1:1.2:1.9. I don't count startup; I only time the time to return from the function call.
Most recently I did hand-looped matrix math and this ratio bore out.
I used gfortran, gcc, and python3.
pjmlp 2 hours ago [-]
Sure, but that doesn't fit the desktop or server workloads I mentioned; I guess we need to carve stuff like HPC out of those server workloads.
I would also add that modern Fortran looks quite sweet, the punched card FORTRAN is long gone, and folks should spend more time learning it instead of reaching out to Python.
If FFI calls are slow (even slower than Ruby -> Ruby calls), that informs the way you use native code. You look for workflows that avoid frequent calls to an FFI function: e.g. a large number of calls in some inner loop. Suppose such a situation cannot be avoided. Then you may have no recourse but to move that loop out of Ruby into C: create a custom native entry point for that use case which you can call once and have it execute the loop, calling the function you really wanted to call many times.
If the FFI call can be made faster, maybe you can keep the loop in Ruby.
Of course that is attractive to people writing an application in Ruby.
That's how I interpret keeping as much code Ruby as possible.
Nobody in their right mind wants to add additional application-specific jigs written in C just to use some C piece.
Once you start doing that, why even have FFI; you can just create a module.
One attractive point about FFI is that you can take some C library and use it in a higher level language without writing a line of C.
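The per-call overhead behind that reasoning can be seen with Ruby's stdlib Fiddle (a libffi wrapper). This is a minimal sketch, not anything from the article, and it assumes libc's `strlen` is resolvable in the running process:

```ruby
require 'fiddle'

# Bind libc's strlen through libffi: every call crosses the FFI boundary.
strlen = Fiddle::Function.new(
  Fiddle.dlopen(nil)['strlen'],  # look up the symbol in the current process
  [Fiddle::TYPE_VOIDP],          # argument types: one char*
  Fiddle::TYPE_SIZE_T            # return type
)

# One boundary crossing per element -- the "inner loop" shape to avoid.
words = %w[foreign function interface]
total = words.sum { |w| strlen.call(w) }  # => 24
```

If this loop dominated a profile, the fix described above would be a custom C entry point that takes the whole array and loops natively, so Ruby pays for one crossing instead of N.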
doppp 15 hours ago [-]
It's been fast for a while now.
schneems 11 hours ago [-]
To add some nuance to the word "fast."
When we optimize Ruby for performance we debate how to eliminate X thousand heap allocations. When people in Rust optimize for performance, they're talking about how to hint to the compiler that the loop would benefit from SIMD.
Two different communities, two wildly different bars for "fast." Ruby is plenty performant. I had a Python developer tell me they were excited about the JIT work in Ruby, hoping Python could adopt something similar. For us the one to beat (or come closer to) is Node.js. We are still slower than it (browser companies spent a LOT of time optimizing JavaScript JITs), but for the relative size of the communities I feel Ruby punches above its weight. I also feel that we should be celebrating tides that raise all ships. Not everything can (or should) be written in C.
I personally celebrate any language getting faster, especially when the people doing it share as widely, and communicate as well, as Aaron.
Thaxll 15 hours ago [-]
Even a 50% or 2x speed improvement still makes it a pretty slow language. It's in the Python range.
CyberDildonics 15 hours ago [-]
What is fast here? Ruby has usually been about 1/150th the speed of C.
kenhwang 15 hours ago [-]
If the code JITs well, Ruby performs somewhere between Go and NodeJS. Without the JIT, it's similar to Lua.
neonsunset 14 hours ago [-]
Node uses V8 which has a very advanced JIT compiler for the hot code, which does a lot of optimizations for reducing the impact of JS's highly dynamic type system.
The claim that Ruby YJIT beats this is not supported by the data to put it mildly:
(and Go is not a paragon of absolute performance either)
kenhwang 14 hours ago [-]
Not at all saying Ruby's compiler is more capable, more that typical Ruby code is easier to optimize by their JIT design than typical JS, largely because Ruby's type system is more sane.
The whitepapers that inspired Ruby's JIT were first tested against a saner subset of JS, and shown to deliver promising performance improvements. The better language/JIT fit is why the current Ruby JIT actually shows performance improvements over the previous, more traditionally designed JIT attempts.
JS can get insanely fast when it's written like low level code that can take advantage of its much more advanced compiling abilities; like when it's used as a WASM target with machine generated code. But humans tend to not write JS that way.
Agreed about Go as well; it tends to be on the slow side for compiled languages. I called it out not as an example of a fast language, but because its typical performance is well known and approximately the upper bound of how fast Ruby can get.
plagiarist 12 hours ago [-]
I did a quick search for the white papers and couldn't find them. Would you be kind enough to leave a link or a title? It sounds interesting, I'd like to read more.
What does this mean? Any JIT can make a tight loop of math and arrays run well but that doesn't mean a typical program runs well.
kenhwang 14 hours ago [-]
For Ruby, it's code where variables and method input/return types can be inferred to remain static, and variable scope/lifetime is finite. From my understanding, much of the performance gain was from removing the need for a lot of type checking and dynamic lookup/dispatch when types were unknown.
So basically, writing code similarly to a statically typed compiled language.
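A small illustration of that difference (hypothetical method names, not from the thread): the first method is monomorphic and JIT-friendly, the second forces the JIT to keep type guards.

```ruby
# Monomorphic: every operand is an Integer, so a JIT that observes the
# types at this site can specialize `+` and `<=` to machine integer ops
# and drop dynamic dispatch and type checks.
def sum_upto(n)
  total = 0
  i = 1
  while i <= n
    total += i
    i += 1
  end
  total
end

# Polymorphic: the same `+` call site may see Integers, Floats, or
# Strings, so the JIT must keep type checks (or deoptimize).
def squash(items)
  items.reduce { |acc, x| acc + x }
end
```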
nicoburns 10 hours ago [-]
Has it? I thought Ruby was pretty much the benchmark for the slowest language. What is it faster than?
pansa2 9 hours ago [-]
Python. At least, it was a few years ago. Both languages have added JIT compilers since then, so I’m not sure how the most recent versions compare.
chefandy 8 hours ago [-]
I see some people say this, but I've never seen a benchmark support it if you include performance-focused implementations like PyPy.
Is there somewhere with benchmarks that supports the idea that Ruby is faster than Python?
pjmlp 7 hours ago [-]
Because unfortunately PyPy is largely ignored by the community, despite their heroic efforts.
pansa2 8 hours ago [-]
Fair point, I was specifically referring to the CPython interpreter. I don’t know if there’s a benchmark that compares PyPy to JIT-compiled Ruby.
epcoa 7 hours ago [-]
Tcl, VBScript, bash/sh.
Tcl had its web moment during the first dot-com era with AOLserver.
pjmlp 5 hours ago [-]
Not only those; you are missing Vignette, and our own Safelayer (yes, I know it isn't public).
However, exactly because of the experience of writing Tcl extensions all the time for performance, since 2003 I no longer use programming languages without JIT/AOT other than for scripting tasks, or when the decision is external.
The founders at our startup went on to create OutSystems, with many of the learnings, but using .NET instead, after we were given access to .NET during its "Only for MSFT partners eyes" early state.
m00x 9 hours ago [-]
Python
kevingadd 15 hours ago [-]
When dealing with a managed language that has a JIT or AOT compiler it's often ideal to write lots of stuff in the managed language, because that enables inlining and other optimizations that aren't possible when calling into C.
This is sometimes referred to as "self-hosting", and browsers do it a lot by moving things into privileged JavaScript that might normally have been written in C/C++. Surprisingly large amounts of the standard library end up being not written in native code.
kenhwang 15 hours ago [-]
Ruby has realized this as well. When running in YJIT mode, some standard library methods switch to a pure-Ruby implementation instead of the C implementation, because the YJIT-optimized Ruby performs better.
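As a sketch (not the actual CRuby source), a pure-Ruby rewrite of a core method looks something like this; because both the loop and the block call are Ruby, YJIT can inline and specialize across them, which it cannot do through a C implementation:

```ruby
class Array
  # Hypothetical pure-Ruby stand-in for a C-implemented core method.
  def my_map
    result = []
    i = 0
    while i < size
      result << yield(self[i])  # the block call stays visible to the JIT
      i += 1
    end
    result
  end
end

[1, 2, 3].my_map { |x| x * 2 }  # => [2, 4, 6]
```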
internetter 15 hours ago [-]
Oh, I am indeed surprised! I guess I always assumed that most of the JavaScript standard library was written in C++
achierius 13 hours ago [-]
Well, almost all of the compiler, runtime, allocator, garbage collector, object model, etc. are indeed written in C++.
And so are many special operations (e.g. crypto functions, array sorts, walking the stack).
But specifically with regard to library functions, like the other commenter said, losing out on inlining sucks, and crossing between JS and native code can be pretty expensive, so even for things like sorting an array it can be better to do it in JS to avoid the overhead... especially in cases where you can provide a callback as your comparator, which is JS, and thus you have to cross back into JS for every element.
So it's a balancing game, and the various engines have gone back and forth on which functions are implemented in which language over time
neonsunset 15 hours ago [-]
FFI presents an opaque, unoptimizable boundary of code. Having chatty code like this is going to cost a lot. To the point where this is even a factor in much faster languages with zero-cost-ish interop like C# - you still have to make a call, sometimes paying the cost of modifying state flags for VM (GC transition).
If Ruby YJIT is starting to become a measurable factor (after all, it was slower than other, purely interpreted, languages until recently), then the same rule as above will become more relevant.
aidenn0 11 hours ago [-]
Why does this need to be JIT compiled? If it could be written in C, then it certainly could just be compiled at load time, no?
nirvdrum 6 hours ago [-]
If what could be written in C? The FFI library allows for dynamic binding of library methods for execution from Ruby without the need to write a native extension. That's a huge productivity boost and makes for code that can be shared across CRuby, JRuby, and TruffleRuby.
I suppose if you could statically determine all of the bindings at boot up you could write a stub and insert into the method table. But, that still would happen at runtime, making it JIT. And it wouldn't be able to adapt to the types flowing through the system, so it'd have to be conservative in what it accepts or what it optimizes, which is what libffi already does today. The AOT approach is to write a native extension.
IshKebab 5 hours ago [-]
> Even in those cases, I encourage people to write as much Ruby as possible, especially because YJIT can optimize Ruby code but not C code.
But the C code is still going to be waaay faster than the Ruby code even with YJIT. That seems like an odd reason to avoid C. (I think there are other good reasons though.)
Alifatisk 4 hours ago [-]
> the C code is still going to be waaay faster than the Ruby code even with YJIT.
I can't find it, but I remember seeing a talk where they showed examples of Ruby + YJIT hitting the same speed as C, and in some cases a bit more. The downside was that it required some warmup time.
eay_dev 8 hours ago [-]
I've been using Ruby more than 10 years, and seeing its development in these days is very exciting. I hope
nialv7 14 hours ago [-]
isn't this exactly what libffi does?
kazinator 12 hours ago [-]
libffi is slow; it doesn't JIT as far as I know.
In libffi you build up descriptor objects for functions. These are run-time data structures which indicate the argument and return value types.
When making a FFI call, you must pass in an array of pointers to the values you want to pass, and the descriptor.
Inside libffi there is likely a loop which walks the list of values while traversing the descriptor, and places those values onto the stack in the right way according to the type indicated in the descriptor. When the function is done, it then pulls out the return value according to its type. It's probably switching on type for all these pieces.
Even if the libffi call mechanism were JITted, the preparation of the argument array for it would still be slow. It's less direct than an FFI JIT that directly accesses the arguments without going through an intermediate array.
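Ruby's stdlib Fiddle (a libffi wrapper) makes the descriptor-driven shape concrete: the type arrays below become a runtime call descriptor that is consulted on every call. A sketch, assuming libc's `atoi` is resolvable in the current process:

```ruby
require 'fiddle'

# The argument/return type arrays are the runtime "descriptor" that
# libffi walks on each call to marshal values into registers/stack.
atoi = Fiddle::Function.new(
  Fiddle.dlopen(nil)['atoi'],  # symbol lookup in the running process
  [Fiddle::TYPE_VOIDP],        # one char* argument
  Fiddle::TYPE_INT             # int return value
)

atoi.call("42")  # each call re-interprets the descriptor => 42
```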
FFI JIT code will directly take the argument values, convert them from the Ruby (or whatever) type to the C type, and stick it into the right place on the stack or register, and do that with inline code for each value. Then call the function, and convert the return value to the Ruby type.
Basically, it's as if you wrote the extension code by hand.
If there is type inference, the conversion code can skip type checks. If we are assured that arg1 is a Ruby string, we can use an unsafe, faster version of the RubyToCString function.
The JIT code doesn't have to reflect over anything other than, at worst, the Ruby types. It doesn't have to have any array or list related to the arguments. It knows which C types are being converted to and from, and that is hard-coded: there is no data structure describing the C side that has to be walked at run-time.
tenderlove 14 hours ago [-]
libffi can't know how to unwrap Ruby types (since it doesn't know what Ruby is). The advantage presented in this post is that the code for type unboxing is basically "cached" in the generated machine code based on the information the user passes when calling `attach_function`.
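The bind-once idea can be sketched with the stdlib's `Fiddle::Importer` standing in for the ffi gem's `attach_function` (an illustration, not the post's implementation): the signature string is parsed once at bind time, and every later call reuses that decision.

```ruby
require 'fiddle/import'

module LibC
  extend Fiddle::Importer
  dlload Fiddle.dlopen(nil)  # bind against symbols already in the process
  # The signature is parsed once here; the unboxing strategy for the
  # Integer argument is fixed at bind time, not rediscovered per call.
  extern 'int abs(int)'
end

LibC.abs(-5)  # => 5
```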
dzaima 13 hours ago [-]
libffi doesn't JIT for FFI calls; and it still requires you to lay out argument values yourself, i.e. for a string argument you'd still need to write code that converts a Ruby string object to a C string pointer. And libffi is rather slow.
(the tramp.c linked in a sibling comment is for "reverse-FFI", i.e. exposing some dynamic custom operation as a function pointer; and its JITting there amounts to a total of 3 instructions to call into precompiled code)
almostgotcaught 14 hours ago [-]
You know, I thought I knew what libffi was doing (I thought it was playing tricks with the GOT or something like that) but I think you're right.
I can sense why it didn’t go to tenderlovemaking.com
tenderlove 15 hours ago [-]
I think tenderworks wrote this post.
pestatije 16 hours ago [-]
FFI - Foreign Function Interface, or how to call C from Ruby
tonetegeatinst 14 hours ago [-]
The totally safe and sane approach is to write C code that gets passed data via the command line during execution, then vomits results to the command line or just into a memory page.
Then just execute the C program with your flags or data in the terminal using Ruby and voilà, Ruby can run C code.
grandempire 12 hours ago [-]
This. I think many people do not understand Unix processes and don't realize how rare it is to need bindings, FFI, and many libraries.
How many programs have an https client in them because they need to make one request and didn’t know they could use curl?
nirvdrum 10 hours ago [-]
Can you please elaborate on this because I'm struggling to follow your suggestion. Shelling out to psql every time I want to run an SQL query is going to be prohibitively slow. It seems to me you'd need bindings in almost the exact same cases you'd use a shared library if you were writing in C and that's really all bindings are anyway -- a bridge between the VM and a native library.
grandempire 9 hours ago [-]
Spawning a process isn't the right tool for ALL X-language communication. But sometimes it is - and the bias tends to be to overlook these opportunities. When you are comfortable using libraries, you make more libraries. When you know how to use programs, you more often make programs.
> Shelling out to psql
I would recommend using a Postgres connection library, because that's how Postgres is designed.
Note that ongoing communication can still work with a multi-process stdin/stdout design. This is how email protocols work. So someone could design a SQL client that works this way.
I have absolutely written batch import scripts which simply spawn psql for every query, with great results.
> in almost the exact same cases you'd use a shared library
That's the thing. Libraries are an entangling relationship (literally in your binary). Programs in contrast have a clean, modular interface.
So for example you can choose to load the imagemagick library, or you can spawn imagemagick. Which one is better depends, but most often you don't need the library.
Here is a list of examples I have seen solved with a library that were completely unnecessary:
- identify the format of an image.
- zip some data
- post usage analytics to a web server at startup
- diff two pieces of data
- convert math symbols to images
- convert x format to y
I have even seen discourse online that suggests that if you are serious about AI your web stack needs to be in python - as if you can't spawn a process for an AI job.
nirvdrum 6 hours ago [-]
> Spawning a process isn't the right tool for ALL X-language communication. But sometimes it is
I'm with you here.
> ...many people do not understand Unix processes and don’t realizing how rare it is to need bindings, ffi, and many libraries
But, this is a much stronger claim.
I can't tell if you're making a meta point or addressing something in the Ruby ecosystem. I mentioned database library bindings because that's far and away the most common usage in the Ruby ecosystem, particularly because of its frequent adoption for web applications.
The author is advocating for not using native code at all if you can avoid it. Keep as much code in Ruby as you can and let the JIT optimize it. But, if you do need bindings, it'd be great if you didn't have to write a native extension. There are a lot of ways to shoot yourself in the foot and they complicate the build pipeline. However, historically, FFI has been much slower than writing a native extension. The point of this post is to explore a way to speed up FFI in the cases where you need it.
It needs to be taken on faith that the author is doing this work because he either has performance sensitive code or needs to work with a library. Spawning a new process in that case is going to be slower than any of the options explored in the post. Writing and distributing a C/C++/Zig/Rust/Go application as part of a Rails app is a big hammer to swing and complicates deployments (a big part of the reason to move away from native extensions). It's possible the author is just complicating things unnecessarily, but he's demonstrated a clear mastery of the technologies involved so I'm willing to give him the benefit of the doubt.
A frequent critique of Ruby is that it's slow. Spawning processes for operations on the hot path isn't going to help that perception. I agree there are cases where shelling out makes sense. E.g., spawning out to use ImageMagick has proven to be a better approach than using bindings when I want to make image thumbnails. But, those are typically handled in asynchronous jobs. I'm all for anything we can do to speed up the hot path and it's remarkable how much performance was easily picked up.
fomine3 11 hours ago [-]
It's slow
brigandish 9 hours ago [-]
It's an aside, but
> Now, usually I steer clear of FFI, and to be honest the reason is simply that it doesn’t provide the same performance as a native extension.
I usually avoid it, or in particular, gems that use it, because compilation can be such a pain. I've found it easier to build it myself and cut out the middleman of Rubygems/bundler.
shortrounddev2 15 hours ago [-]
Does ruby have its equivalent to typescript, with type annotations? The language sounds interesting but I tend not to give dynamically typed languages the time of day
dragonwriter 15 hours ago [-]
> Does ruby have its equivalent to typescript, with type annotations?
Ruby has a first party external type definition format (RBS) as well as third-party typecheckers that check ruby against RBS definitions.
There is probably more use of the older, all third-party typing solution (Sorbet) though.
Alifatisk 4 hours ago [-]
> Does ruby have its equivalent to typescript, with type annotations?
There's https://sorbet.org/ but it's not clear whether it has much adoption.
zem 15 hours ago [-]
I continue to think it was a big mistake not to add syntactic support for type annotations into the base language. python did this right; annotations are not enforced by the interpreter, but are accessible both by external tools as part of the AST and bytecode, and by the running program via introspection, so tools and libraries can do all sorts of interesting things with them.
having to add annotations in a separate header file is simply too high friction to get widespread adoption.
Lammy 15 hours ago [-]
IMHO (and I don't expect most people to agree but please be tolerant of my opinion!) annotations are annoying busywork that clutter my code and exist just to make people feel smart for “““doing correctness”””. The only check I find useful is nil or not-nil, and any halfway-well-designed interface should make it impossible for some unexpected object type to end up in the wrong place anyway. For anything less than halfway-well-defined, you have bigger issues than a lack of type annotation.
edit: I am quite fond of `case ::Ractor::receive; when SomeClass then …; when SomeOtherClass then …; end` as the main pattern for my Ractors though :)
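The dispatch-on-class pattern described above can be sketched portably with a Thread and two Queues (Ractor is still experimental and its message-passing API is in flux; with a Ractor, `inbox.pop` would become `Ractor.receive`):

```ruby
inbox, outbox = Queue.new, Queue.new

# Worker dispatches on the class of each incoming message.
worker = Thread.new do
  loop do
    case msg = inbox.pop
    when Integer then outbox << msg * 2
    when String  then outbox << msg.upcase
    when :done   then break
    end
  end
end

inbox << 21
inbox << 'hi'
doubled = outbox.pop
shouted = outbox.pop
p doubled  # => 42
p shouted  # => "HI"
inbox << :done
worker.join
```

The `case`/`when SomeClass` form uses `Class#===`, so each branch matches instances of that class, which is what makes it a natural message dispatcher.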
zem 14 hours ago [-]
as your codebase and number of collaborators get larger, it's super useful to have the type checker be able to tell you "hey, you said your function arg could be a time or an int, but you are calling time-specific methods on it" or conversely "the function you are calling says it accepts time objects but you are passing it an int"
also once you get into jit compilation you can do some nice optimisations if you can treat a variable type as statically known rather than dynamic.
and finally, even if you're not writing python at scale it can be very nice to use the type annotations to document your function parameters.
Lammy 14 hours ago [-]
> also once you get into jit compilation you can do some nice optimisations if you can treat a variable type as statically known rather than dynamic.
This is something I hadn't considered. Thanks for mentioning it :)
FooBarWidget 11 hours ago [-]
Sorbet is the most mature option. RBS barely has any tooling, while Sorbet works well.
It definitely isn't at the level of Typescript adoption, even relatively speaking. And it's more clunky than Typescript. But it works well enough to be valuable.
Lio 6 hours ago [-]
It's interesting that RBS support is used by IRB for type completion.
There is also work to settle on an inline form of RBS so I could see it taking over from Sorbet annotations in the future.
teaearlgraycold 13 hours ago [-]
This is the main thing keeping me from going back to Ruby. I don’t want to go back to the stone age where there’s no or poor static analysis
nirvdrum 6 hours ago [-]
If you're looking for static typing, a dynamic language is going to be a poor fit. I find a place for both. I love Rust, but trying to write a tool that consumed a GraphQL API with it was a brutal exercise in frustration. I'd say that goes for typing of JSON or YAML or whatever structured format in general. It's refreshing being able to just work with data in the form I already know it's in. Ruby can be an incredibly productive language to work with.
If you're looking for static analysis in general, please note that there are mature tools available. Rubocop¹ is probably the most popular and allows for linting and code formatting. Brakeman² is a vulnerability scanner for Rails. Sorbet³ is a static type checker.
The tooling is there if you want to try things out. But, if you want a statically typed language then that's a debate that's been going since the dawn of programming language design. I doubt it's going to get resolved in this thread.
For more details, you can see my blog post on the subject: https://timefold.ai/blog/java-vs-python-speed
- The Java side needs to store opaque Python pointers which may have no references on the CPython side.
- The CPython side needs to store generated proxies for some Java objects (the result of constraint collectors, which are basically aggregations of a solution's data).
Solving runs a long time, typically at least an hour (although you can configure how long it runs). If we don't free memory (by releasing the opaque Python pointer return values), we quickly run out of memory after a couple of minutes. The only way to free memory on the Java side is to close the arena holding the opaque Python pointer. However, when that arena is closed, its memory is zeroed out to prevent use-after-free. As a result, if CPython hasn't garbage collected that pointer yet, the next CPython garbage collection cycle causes a segmentation fault.
JPype (a CPython -> Java bridge) does dark magic to link the JVM's and CPython's garbage collectors, but has performance issues when calling a CPython function inside a Java function, since its proxies have to do a lot of work. Even GraalPy, where Python is run inside a JVM, has performance issues when Python calls Java code which calls Python code.
Like, talk over some queue, file, HTTP, etc.?
Using IPC for all internal calls would probably add significant overhead; the user functions are typically small (think `lambda shift: shift.date in employee.unavailable_dates` or `lambda lesson: lesson.teacher`). Depending on how many constraints you have and how complicated your domain model is, there could be hundreds of context switches for a single score calculation. It might be worth prototyping, though.
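The cost being described, a pipe round-trip per tiny function call versus a direct in-process call, is easy to measure. A rough Ruby sketch (the child process and workload are illustrative stand-ins, not the solver's actual setup):

```ruby
require 'benchmark'
require 'open3'

# The "user function": trivially small, as described above.
f = ->(x) { x * 2 }

# A child process that answers each request line over a pipe.
stdin, stdout, wait_thr =
  Open3.popen2('ruby', '-ne', 'puts($_.to_i * 2); $stdout.flush')

# Warm up once so child startup time is excluded from the measurement.
stdin.puts(0)
stdout.gets

n = 1_000
direct = Benchmark.realtime { n.times { f.call(21) } }
ipc    = Benchmark.realtime do
  n.times do
    stdin.puts(21)
    stdout.gets
  end
end

stdin.close
wait_thr.value
puts format('direct: %.6fs  ipc: %.6fs', direct, ipc)
```

Even over a warm local pipe, each round-trip costs two context switches plus serialization, which is why per-call IPC is a hard sell for hot inner loops.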
There isn’t really a much more productive web dev setup than Rails + your favorite LLM tool. Will take time to earn Gen Z back to Rails though and away from Python/TS or Go/Rust.
It's not the fastest language, but it's faster than a lot of dynamic languages. Other than the lack of native types, you can manage pretty large rails apps easily. Chime, Stripe, and Shopify all use RoR and they all have very complex, high-scale financial systems.
The strength of your tool is limited to the person who uses the tool.
Btw Stripe uses Ruby, but not Rails.
Such as?
IME Ruby consistently falls behind, often way behind, nearly all popular languages in "benchmark battles".
I haven’t seen direct comparisons, but I wouldn’t be surprised if TruffleRuby was already faster than Elixir, Erlang, or PHP for single-threaded CPU-bound tasks too.
Of course that’s still way behind other languages but it’s still surprisingly good.
Isn't that moving the goal post a lot?
We went from 'faster than a lot of others' to 'competing for worst in class'.
I'm not trying to be facetious, I'm curious as I often read "X is really fast" where X is a functional/OOP language that nearly always ends up being some combination of slow and with huge memory overhead. Even then, most Schemes (or Lisps in general) are faster.
Being faster single threaded against runtimes that are built specifically for multithreaded, distributed workloads is also perhaps not a fair comparison, esp. when both runtimes are heavily used to write webservers. And again, Erlang (et al) come out faster even in those benchmarks.
Is TruffleRuby production (eg. Rails) ready? If so, is it that much faster?
I remember when the infamous "Truffle beats all Ruby implementations"-article came out that a lot of Rubyists were shooting it down, however this was several years ago by now.
Originally you just asked, "such as" [which dynamic language is Ruby faster than?], implying Ruby is slower than every other dynamic language, which is not the case.
JRuby is faster than MRI Ruby for some Rails workloads and very much production ready.
TruffleRuby is said to be about 97% compatible with MRI on the rubyspec, but IMHO it isn't production ready for Rails yet. It does work well enough for many standalone non-Rails tasks, though, and could potentially be used for running Sidekiq jobs.
The reason to mention the alternative ruby runtimes is to show that there's nothing about the language that means it can't improve in performance (within limits).
Whilst it's true that ruby is slower than Common Lisp or Scheme, ruby is still improving and the gap is going to greatly reduce, which is good news for those of us that enjoy using it.
Perl, Tcl, Smalltalk etc are basically non-existent where I'm from, so they didn't occur to me.
Perhaps I'm projecting a lot here. I have worked a lot on high-performance systems and am often triggered by claims of performance, e.g. 'X is faster than C', which is 99.9% of the time false by two orders of magnitude. That didn't happen here.
Thank you for taking the time to answer.
Oh not at all, no I didn't think that. I'm enjoying the conversation.
It's interesting that you mention Smalltalk as I believe that some of the JIT ideas we're seeing in YJIT are borrowed from there.
As for the "faster than C" talk, it is very specific to ruby (or JIT'd) runtimes and overheads only in that context.
I think it gets mentioned because it seems so counter intuitive at first. It's not to imply C isn't orders of magnitude faster in general.
Along with the new out of the box features of Rails 8, the work on Ruby infrastructure is making it an exciting technology to work with again (IMHO).
I never worked at Twitter, but based on the timeline it seems very likely they were running on the old Ruby 1.8.x line, which was a pure AST interpreter. The VM is now a bytecode interpreter that has been optimized over the intervening years. The GC is considerably more robust. There's a very fast JIT compiler included. Many libraries have been optimized and bugs squashed.
If your concern is Rails, please note that also has seen ongoing development and is more performant, more robust, and I'd say better architected. I'm not even sure it was thread-safe when Twitter was running on it.
You don't have to like Ruby or Rails, but you're really working off old data. I'm sure there's a breaking point in there somewhere, but I very much doubt most apps will hit it before going bust.
But C?
All my new projects will be Rails. (What about projects that don’t lend themselves to Rails? I don’t take on such projects ;)
Well, because it isn't.
Crystal is an ergonomic language, too, looking a lot like Ruby even beyond a cursory glance. What Ruby has, like any longstanding language, is a large number of packages to help development along, so languages like Crystal have to do a lot of catching up. Looking at the large number of abandoned gems though, I'm not sure it's that big a difference, the most important ones could be targeted.
I'm not sure that has any relevance when compared with Python or JS or Go though, they seem to have thriving ecosystems too - is Rails really that much better than the alternatives? I wouldn't know but I highly doubt it.
No, it was never intended to be a replacement for Ruby. They share similarities in syntax, the same way Elixir's syntax reminds you of Ruby.
If you want faster Ruby, check out MRuby (not always a drop-in replacement though).
I enjoy Ruby, but ActiveRecord is a mess and the language is slow and lacks real-time functionality.
I am pretty sure this is the basis of the LuaJIT FFI: https://luajit.org/ext_ffi.html
I think LuaJIT's FFI is very fast for this reason.
I feel like I'm not getting something. Isn't ruby a pretty slow language? If I was dipping into native I'd want to do as much in native as possible.
In one major release, there was a bunch of Java code responsible for handling some UI element activities. It was found to be a bottleneck, and rewritten in C code for the next major release.
Then the JIT became properly useful, and the FFI overhead was more than the difference between the hand-tuned C code and what the JIT would spit out on its own. So in the next major release, they rolled back to the all-Java implementation.
Java had a fairly reasonably fast FFI for that generation of programming language, but they swapped for a better one a few releases after that. And by then I wasn't doing a lot of Java UI code so I had stopped paying attention. But around the same time they were also making a cleaner interface between the platform-specific and the general Java code for UI, so I'm not entirely sure how that played out.
But that's exactly the sort of see-sawing you need to at least keep an eye out for when doing this sort of work. Would you be better off waiting a couple milestones and saving yourself a bunch of hand-tuning work, or do you need it right now for political or technical reasons?
While this is suboptimal for one-shot execution, when an application is long-lived (mostly desktop or server workloads) this work pays off over the life of the application.
For example, Dalvik had a pretty lame JIT, thus it was faster calling into C for math functions, eventually with ART this was no longer needed, JIT could outperform the cost of calling into C.
https://developer.android.com/reference/android/util/FloatMa...
Most recently I did hand-looped matrix math and this ratio bore out.
I used gfortran, gcc, and python3.
I would also add that modern Fortran looks quite sweet, the punched card FORTRAN is long gone, and folks should spend more time learning it instead of reaching out to Python.
TL;dr - JIT rules.
If the FFI call can be made faster, maybe you can keep the loop in Ruby.
Of course that is attractive to people writing an application in Ruby.
That's how I interpret keeping as much code Ruby as possible.
Nobody in their right mind wants to add additional application-specific jigs written in C just to use some C piece.
Once you start doing that, why even have FFI; you can just create a module.
One attractive point about FFI is that you can take some C library and use it in a higher level language without writing a line of C.
When we optimize Ruby for performance we debate how to eliminate X thousand heap allocations. When people in Rust optimize for performance, they're talking about how to hint to the compiler that the loop would benefit from SIMD.
Two different communities, two wildly different bars for "fast." Ruby is plenty performant. I had a Python developer tell me they were excited for the JIT work in Ruby, as they hoped Python could adopt something similar. For us the one to beat (or come closer to) would be Node.js. We are still slower than them (lots of browser companies spent a LOT of time optimizing JavaScript JITs), but I feel that for the relative size of the communities, Ruby punches above its weight. I also feel that we should be celebrating tides that raise all ships. Not everything can (or should be) written in C.
I personally celebrate any language getting faster, especially when the people doing it share as widely and are as good of a communicator as Aaron.
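The "eliminate X thousand heap allocations" kind of work mentioned above usually starts with counting them. A sketch using only stdlib GC counters (the measured snippet is illustrative):

```ruby
# Count how many objects a block allocates, using GC.stat's
# monotonically increasing total_allocated_objects counter.
def allocations
  GC.disable
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
ensure
  GC.enable
end

# Each 'a' + 'b' allocates at least one new String for the result.
concat = allocations { 1_000.times { 'a' + 'b' } }
puts "String concat allocated #{concat} objects across 1,000 iterations"
```

Tools like memory_profiler build on the same idea, but the raw counter is often enough to confirm or refute a hunch about a hot path.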
The claim that Ruby YJIT beats this is not supported by the data to put it mildly:
https://benchmarksgame-team.pages.debian.net/benchmarksgame/... (scroll down on each submission and you will see it uses YJIT)
(and Go is not a paragon of absolute performance either)
The whitepaper that inspired Ruby's JIT was first tested against a saner subset of JS and shown to deliver promising performance improvements. The better language/JIT compatibility is why the current Ruby JIT actually shows performance improvements over the previous, more traditionally designed JIT attempts.
JS can get insanely fast when it's written like low level code that can take advantage of its much more advanced compiling abilities; like when it's used as a WASM target with machine generated code. But humans tend to not write JS that way.
Agreed about Go as well; it tends to be on the slow side for compiled languages. I called it out not as an example of a fast language, but because its typical performance is well known and approximately the upper bound of how fast Ruby can get.
The author was eventually hired to develop YJIT.
What does this mean? Any JIT can make a tight loop of math and arrays run well but that doesn't mean a typical program runs well.
So basically, writing code similarly to a statically typed compiled language.
https://programming-language-benchmarks.vercel.app/amp/pytho...
Is there somewhere with benchmarks that supports the idea that Ruby is faster than Python?
Tcl had its web moment during the first dot-com era within AOLserver.
However, exactly because of the experience of writing Tcl extensions all the time for performance, since 2003 I no longer use programming languages without JIT/AOT other than for scripting tasks, or when the decision is external.
The founders at our startup went on to create OutSystems, with many of the learnings, but using .NET instead, after we were given access to .NET during its "Only for MSFT partners eyes" early state.
This is sometimes referred to as "self-hosting", and browsers do it a lot by moving things into privileged JavaScript that might normally have been written in C/C++. Surprisingly large amounts of the standard library end up being not written in native code.
But specifically with regards to library functions, like the other commenter said, losing out on inlining hurts, and crossing between JS and native code can be pretty expensive, so even for things like sorting an array it can be better to do it in JS to avoid the overhead, especially in cases where you can provide a callback as your comparator, which is JS, and thus you have to cross back into JS for every element.
So it's a balancing game, and the various engines have gone back and forth on which functions are implemented in which language over time
If Ruby YJIT is starting to become a measurable factor (after all, it was slower than other, purely interpreted, languages until recently), then the same rule as above will become more relevant.
I suppose if you could statically determine all of the bindings at boot up you could write a stub and insert into the method table. But, that still would happen at runtime, making it JIT. And it wouldn't be able to adapt to the types flowing through the system, so it'd have to be conservative in what it accepts or what it optimizes, which is what libffi already does today. The AOT approach is to write a native extension.
But the C code is still going to be waaay faster than the Ruby code even with YJIT. That seems like an odd reason to avoid C. (I think there are other good reasons though.)
I can't find it, but I remember seeing a talk where they showed examples of Ruby + YJIT hitting the same speed as C, and in some cases a bit more. The downside, though, was that it required some warmup time.
In libffi you build up descriptor objects for functions. These are run-time data structures which indicate the argument and return value types.
When making a FFI call, you must pass in an array of pointers to the values you want to pass, and the descriptor.
Inside libffi there is likely a loop which walks the list of values while traversing the descriptor, and places those values onto the stack in the right way according to the type indicated in the descriptor. When the function is done, it then pulls out the return value according to its type. It's probably switching on type for all these pieces.
Even if the libffi call mechanism were JITted, the preparation of the argument array for it would still be slow. It's less direct than a FFI jit that directly accesses the arguments without going through an intermediate array.
FFI JIT code will directly take the argument values, convert them from the Ruby (or whatever) type to the C type, and stick it into the right place on the stack or register, and do that with inline code for each value. Then call the function, and convert the return value to the Ruby type. Basically as if you wrote extension code by hand:
If there is type inference, the conversion code can skip type checks. If we have assurance that arg1 is a Ruby string, we can use an unsafe, faster version of the RubyToCString function. The JIT code doesn't have to reflect over anything other than, at worst, the Ruby types. It doesn't have to have any array or list related to the arguments. It knows which C types are being converted to and from, and that is hard-coded: there is no data structure describing the C side that has to be walked at run-time.
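For a concrete picture of the descriptor-driven path being contrasted here, Ruby's stdlib Fiddle wraps libffi directly: the argument and return types are run-time data that libffi walks on every call. A sketch, assuming `floor` from libm is resolvable in the current process:

```ruby
require 'fiddle'

# Fiddle::Handle::DEFAULT searches symbols already loaded into the
# process (Ruby itself links libm, so floor(3) is available).
floor = Fiddle::Function.new(
  Fiddle::Handle::DEFAULT['floor'],  # address of C's floor()
  [Fiddle::TYPE_DOUBLE],             # argument type descriptors (run-time data)
  Fiddle::TYPE_DOUBLE                # return type descriptor
)

result = floor.call(3.7)
p result  # => 3.0
```

Every `call` re-interprets those type descriptors, which is exactly the per-call overhead an FFI JIT removes by hard-coding the conversions.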
(the tramp.c linked in a sibling comment is for "reverse-FFI", i.e. exposing some dynamic custom operation as a function pointer; and its JITting there amounts to a total of 3 instructions to call into precompiled code)
https://github.com/libffi/libffi/blob/master/src/tramp.c
Someone recommended this to me, so I might even spread the word further: https://github.com/soutaro/rbs-inline?tab=readme-ov-file#rbs...
¹ - https://github.com/rubocop/rubocop
² - https://brakemanscanner.org/
³ - https://sorbet.org/