Ten Years of Erlang
> I wanted to take a bit of time to reflect over most of that decade. In this post, I’ll cover a few things such as hype phases and how this related to Erlang, the ladder of ideas within the language and how that can impact adoption, what changed in my ten years here, and I’ll finish up with what I think Erlang still has to bring to the programming community at large.
> This started my whole story with Erlang, which still goes on today. Joe’s book was approachable, the same way he was. He could explain like no other what the principles of Erlang are, what led to its design, and the core approach to fault tolerance that it embodies. It’s one of the few language-specific books that is not content with getting you to write code, but lets you understand why you should write it that way. The language didn’t just have features because they were cool, it had features because they were needed for fault tolerance. What Joe told you applied anywhere else.
> Live views share functionality with the regular server-side HTML views you are used to writing – you write some template code, and your render function generates HTML for the client. That said, live views go further by enabling stateful views which support bidirectional communication between the client and server. Live views react to events from the client, as well as events happening on the server, and push their rendered updates back to the browser. In effect, we share similar interaction and rendering models with many client-side libraries that exist today, such as React and Ember.
Now in beta.
Achieving 100k connections per second with Elixir
> By analyzing the initial test results, proposing a theory, and confirming it by measuring against modified software, we were able to find two bottlenecks on the way to getting to 100k connections per second with Elixir and Ranch. The combination of multiple connection supervisors in Ranch and multiple listener sockets in the Linux kernel is necessary to achieve full utilization of the 36-core machine under the target workload.
The Curious Case of BEAM CPU Usage
> Turns out, busy waiting in BEAM is an optimization that ensures maximum responsiveness. In essence, when waiting for a certain event, the virtual machine first enters a CPU-intensive tight loop, where it continuously checks to see if the event in question has occurred.
> In our test, we found that BEAM’s busy wait settings do have a significant impact on CPU usage. The highest impact was observed on the instance with the most available CPU capacity. At the same time, we did not observe any meaningful difference in performance between VMs with busy waiting enabled and disabled.
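The busy-wait behaviour described above is tunable at VM startup. A minimal sketch of disabling it (the `+sbwt` family of flags is real, but whether disabling helps is workload-dependent, as the article's own benchmarks suggest):

```shell
# Set the scheduler busy-wait threshold to "none" so schedulers sleep
# instead of spinning while waiting for work. The dirty-scheduler
# variants (+sbwtdcpu, +sbwtdio) exist from OTP 20 onward.
elixir --erl "+sbwt none +sbwtdcpu none +sbwtdio none" -e "IO.puts(:ok)"
```

The trade-off is lower idle CPU usage against potentially slower scheduler wake-ups, which is exactly the responsiveness optimization the excerpt describes.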
What efficient pattern matching looks like at the bytecode level
> My theory is that this is how BEAM makes pattern matching efficient in general: it finds prefixes that can be matched against with select_val. As a corollary, if you don’t see a select_val in your bytecode, then you’re not getting the most efficient pattern-matching branching, as the pattern checks have to be implemented as separate bytecode instructions. It might make sense to rearrange clauses, where doing so doesn’t affect semantics, to see if a select_val pops out in the generated bytecode.
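As a concrete illustration, clauses that all dispatch on distinct literal atoms in the same argument position are the classic candidate for a `select_val` jump table (module and function names here are hypothetical; whether `select_val` actually appears depends on the compiler version, which you can check by inspecting the output of `erlc -S` on equivalent Erlang):

```elixir
# All four clauses match a literal atom in the first argument, so the
# compiler can branch with a single table lookup instead of testing
# each clause sequentially.
defmodule Status do
  def code(:ok), do: 200
  def code(:created), do: 201
  def code(:not_found), do: 404
  def code(:error), do: 500
end
```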
FEZ - an fsharp to core erlang experiment
> Fez is an early-doors experiment in compiling fsharp to BEAM-compatible core erlang. The primary aim is to implement enough of the language to evaluate how well an ML-style language could become a practical language for writing code to be run on the beam.
Choosing Elixir for the Code, not the Performance
I might summarize this as performance is about more than for loops per second.
Elixir v1.5 released
> Elixir v1.5 brings many improvements to the developer experience and quality of life. As we will see, many of those are powered by the latest Erlang/OTP 20.
How Discord Scaled Elixir
A microsecond here, a microsecond there, it all adds up.
Elixir's Secret Weapon
> I’m talking about the special form with.
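For readers who haven't met it, a minimal sketch of what `with` buys you: each `<-` clause must pattern-match, and the first non-matching value short-circuits into `else` (the map keys and module name below are hypothetical):

```elixir
defmodule Signup do
  # `with` chains pattern matches; the happy path reads top to bottom,
  # and any clause returning :error falls through to the else block.
  def create(params) do
    with {:ok, email} <- Map.fetch(params, "email"),
         {:ok, name} <- Map.fetch(params, "name") do
      {:ok, %{email: email, name: name}}
    else
      :error -> {:error, :missing_field}
    end
  end
end

Signup.create(%{"email" => "a@b.co", "name" => "Ann"})
# => {:ok, %{email: "a@b.co", name: "Ann"}}
Signup.create(%{"email" => "a@b.co"})
# => {:error, :missing_field}
```

The alternative is a pyramid of nested `case` expressions, which is exactly what makes `with` feel like a secret weapon.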
Build Your Own Code Poster With Elixir
> Was Elixir the tool for this project? Probably not. Was it still a fun project and a good learning experience? Absolutely.
From Micro-Services to Monoliths
Some of the details here are environment-specific, but they make a useful observation: the loose coupling of components lets problematic pieces be swapped out, and they’ve stepped off the upgrade treadmill driven by outside interests.
Reducing the maximum latency of a bound buffer
> Reading the Pusher articles made me wonder how well the Elixir implementation would perform. After all, the underlying Erlang VM (BEAM) has been built with low and predictable latency in mind, so coupled with other properties such as fault-tolerance, massive concurrency, scalability, and support for distributed systems, it seems like a compelling choice for the job.
Stuff Goes Bad: Erlang in Anger
> Because the system doesn’t collapse the first time something bad touches it, Erlang/OTP also allows you to be a doctor. You can go in the system, pry it open right there in production, carefully observe everything inside as it runs, and even try to fix it interactively.
> This book is not for beginners.
How Discord handles push request bursts of over a million per minute with Elixir’s GenStage
Buffering, dropping, and back pressure, all in good measure.
How Supervisors Work
> In Erlang (and Elixir) supervisors are processes which manage child processes and restart them when they crash.
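The one-sentence definition above can be sketched in a few lines of Elixir (the `Worker` module is hypothetical; a real tree would use more deliberate child specs and restart strategies):

```elixir
# A trivial GenServer to supervise.
defmodule Worker do
  use GenServer
  def start_link(arg), do: GenServer.start_link(__MODULE__, arg, name: __MODULE__)
  def init(arg), do: {:ok, arg}
end

# :one_for_one means: if this child crashes, restart just this child.
children = [
  %{id: Worker, start: {Worker, :start_link, [:some_state]}}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)
```

The linked article digs into how that restart machinery is implemented, which is far more interesting than this surface API.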
Elixir and IO Lists in Phoenix
Great observation about web templates having lots of static parts. Concatenating static strings with dynamic strings creates many more dynamic strings which need to be collected or freed, but making a list of strings preempts all that work.
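A minimal sketch of the idea: instead of concatenating, build a (possibly nested) list whose static elements are shared rather than copied, and let the VM write it out directly:

```elixir
# The static template fragments are reused as-is; only `name` is
# dynamic. BEAM I/O functions accept iolists directly, so no
# flattening or concatenation is needed to write this to a socket.
name = "world"
greeting = ["<h1>", "Hello, ", name, "!", "</h1>"]

IO.iodata_to_binary(greeting)
# => "<h1>Hello, world!</h1>"
```

Phoenix templates compile to exactly this shape, which is why the static parts of a view cost almost nothing per request.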