MMS • Paul Biggar
Biggar: I used to find programming enjoyable, back when I was in college, and I was writing algorithms and little Google puzzles and games. Programming was just a really enjoyable activity that I would do all the time and just have a lot of fun with. At some point, I started working on cloud development. It’s been 15 or something years, and it’s not that I don’t find programming fun anymore, but it’s just sometimes fun. It used to be always fun, and now it’s just like, sometimes a little bit, and then most of the time I’m doing shit. That’s for the cloud. That’s for servers. That’s for DevOps and that sort of thing. I don’t find any of that stuff fun. I’m like reflecting on this path that we took from code used to be fun into code is no longer fun. If you think about a simple program that you used to write, maybe a game, just like a simple thing that you’d write in the command line, you’d probably write it in Python, let’s say, hypothetically, you would. The first thing you need to do to get it into the cloud is to make it persistent, and so you’re going to add some SQL. When I say you add some SQL, you’re not just adding some SQL, you’re fundamentally changing your program from storing things in memory to storing things in a database.
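As a rough illustration of the shift being described, here is a hedged Python sketch of what happens when the in-memory state of a simple program moves into a database. The names and the use of sqlite3 as a stand-in for a real server database are invented for illustration; the point is that persistence changes how every read and write in the program works.

```python
import sqlite3

# In the in-memory version, game state is just a Python dict that dies
# with the process.
scores = {}

def record_score_in_memory(player, score):
    scores[player] = max(scores.get(player, 0), score)

# In the "cloud" version, the same state moves into a database, which
# changes the shape of every read and write. (sqlite3 stands in for a
# real server database here.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (player TEXT PRIMARY KEY, score INTEGER)")

def record_score_in_db(player, score):
    conn.execute(
        "INSERT INTO scores (player, score) VALUES (?, ?) "
        "ON CONFLICT(player) DO UPDATE SET score = MAX(score, excluded.score)",
        (player, score),
    )
    conn.commit()
```

The two functions do the same job, but the second one has already pulled in SQL, a schema, and upsert semantics, and that is before any ORM, connection pooling, or server provisioning enters the picture.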
Let’s just simplify things a little bit. You’re storing things in SQL. Then to make that SQL easier, you’re going to add an ORM, avoid SQL injection and all that sort of thing, make things a little bit better. Then you need to put it on the internet, and to make it a little bit more reliable, you need a real database there, so you need Postgres. To make it more reliable still, you need the real Postgres: it’s not just that you’re running Postgres on a server, you’re running a primary-secondary setup with backups and a connection pooler and that sort of thing. You also need to get your server, your Python program, this process, this script that’s running on your machine, onto the internet as well. That one is going to involve a container. You’re going to have pip. You’re going to have a VM, all that sort of thing. Then you’re going to add continuous delivery. In order to have that continuous delivery, you need a CI pipeline, Terraform, all this sort of thing that we use to cloudify our stuff. Then we’re going to make it scale. To scale, of course, Kubernetes. That’s how you make things scale. Then once you have Kubernetes, you have all this shit. I’m sure we’ve all seen the Cloud Native Computing Foundation landscape. This is the 2023 version. It’s terrifying.
Out of Control Complexity
You can see how we just slowly ratcheted up the complexity by the requirements of whatever our simple program is; every step along the way, we added stuff. For everything that we added, we made it more complex, and we made it a lot less fun. We’ve done the same in every little part of our industry. Consider DevOps: once upon a time, there were computers. Then we made VMs. Then VMs weren’t great, so we had Docker. Everyone was happy when Docker came out about 10 years ago. Then we realized that Docker wasn’t enough of an abstraction and we needed docker-compose to run microservices on our machine. It didn’t solve the IDE problem, so we needed devcontainers to run on top of it. Then, of course, it didn’t solve the production scale problems, so we added Kubernetes. Then Kubernetes wasn’t enough, we needed Helm. Then we did the exact same thing on the frontend. Once upon a time, we wrote HTML on servers, and we returned it. That’s what we ran in the browser. These days, we write single-page applications. Then single-page applications needed more stuff. They needed reactive frameworks, bundlers, minifiers. I haven’t even listed TypeScript and that whole ecosystem.
There’s just an awful lot of complexity that we’ve ratcheted up in the frontend space. Where we end up with this is specialization. We didn’t solve this problem by reducing the complexity. We didn’t look at all the complexity in frontend, and DevOps, and cloud, and all these other things and say, that’s way too much, we need to slow this down. What we said is, we’re going to specialize as engineers. There used to be just an engineer, and that became the backend engineer. Then we had frontend engineers, and then we had full-stack engineers who know SQL and CSS. Then we added DevOps engineers, and platform engineers, and security engineers. There’s growth engineers. There’s data engineers. Now there’s even prompt engineers. I’m not saying any of this is bad. I think it is good. If you’ve looked at prompt engineering, it is hard. There’s an awful lot to maintain. There’s a million new papers that come out every week. You need a discipline. I don’t know if it’s a discipline yet, but you’re going to need something like it. All of these things have a culture, they have a lingo, they have tools, they have specialization. We needed the specialization to solve the complexity.
Along with this, we ended up with large teams. If you need all this specialization, then you need all of them in your team. We have 100-person organizations that are doing the jobs of ostensibly 12 people. Then you have a whole menagerie of management to take care of it. You have engineering managers and tech leads and architects, just trying to drive it along. Of course, product managers, in fact an entire product organization and product discipline, whose role is to take all this complexity that we’ve created for ourselves, and somehow get it over the line so that we can make some money. The complexity that we have is out of control. We’re going to talk about accidental complexity and essential complexity. I like this phrase, and I’m going to focus on this phrase, “Simple, understandable tools that interact well with our existing tools.” This is what we do as an industry. This is how we talk about solving complexity. You have simple tools that compose well, they’re modular. There’s a million keynotes about this. There are even programming languages that were invented to solve this. The Unix philosophy, of course: “Do one thing, and do it well.” This is almost like a foundation of our industry. The unfortunate thing is that it’s complete bullshit. This idea of simple, understandable tools that interact well with existing tools is the problem that got us here. As I talked us through all that stuff earlier about the cloud and the programs and DevOps, we’re adding a new, simple, understandable tool to our existing stack. Suddenly, we have dozens, hundreds of simple, understandable tools that all need to interact with each other, and that we need to be experts in, in order to be able to make it work.
It’s inherent in the incentives of our industry. Let’s say, hypothetically that you have a service, your service is a little less reliable than it needs to be. Users are complaining. It’s got downtime, people are getting paged. You, as an engineer, you sit down, maybe another engineer, two, three of you, and you sit down and you say I’m going to fix this. We’re going to take a couple of off-the-shelf tools, and we’re going to increase the reliability. We’re going to rewrite this service in Go and Kafka, it’s going to take two weeks. You do a spike and you take it to your boss and you say, we’ve done a spike, we think we’ve got this under control. Great. Take another two weeks to get into production, another two weeks to figure out the bugs, another two weeks to get the whole thing out. Two months done, end-to-end, this is a successful project in our industry. It works. Reliability is up from 95 to 99.99. Costs are down. No one is getting paged. Everyone is happy. What you’ve done is you’ve introduced two new tools into your ecosystem, Go and Kafka. It’s not about Go and Kafka, it’s about the incentives of our industry to add things to our stacks, and to reuse the tools that other people have done. This is what we’re supposed to do. Someone else wrote Kafka, so we should use Kafka for the thing, because they’ve put thousands of engineering hours into making this reliable system, so we need to bring that in. Now that’s a tool that we need to manage, and a tool that we need to understand.
It’s the same thing with startups. Startups are incentivized to create. You’ve got some idea for a new platform engineering tool, a new observability tool, a new logging tool, something like that, even a new SaaS. Whatever company you’re working at builds a thing that has to interact with the existing ecosystem. It makes sense to take this tool, to look at what works in the ecosystem, and to extend the existing ecosystem with a new tool. That’s what the incentives of our industry are. The unfortunate side is that this do one thing and do it well, solve one problem and solve it well, is the problem that we end up with. It should have been obvious from the start, because this phrase, this do one thing and do it well, came from Unix. If we look at Unix, we don’t really use Unix. Every server on the internet is Unix, but we use POSIX, and the Linux kernel, and GNU tools, and that sort of thing to run HTTP servers, which don’t do anything in Unix. If you happen to know Awk, or Sed, or find, or any of the GNU tools, you might know a couple of flags, you might know a little bit, but if you were presented with a program in Awk, you’d freak out, and so would I. It should have been a clue to us, perhaps, that we aren’t writing HTTP servers that pipe their input into bash scripts and pipe things around. This idea of do one thing and do it well was never true, at least in the modern web-driven era.
Batteries Included (Holistic Tooling)
You can look at this idea of batteries included. This phrase originated with Python in the ’90s. The idea was, we’re going to pull things into the standard library that weren’t in the standard library before. Instead of just the minimal string handling we had in C, we’re going to have more than that. We’re going to have an XML parser, and a HTTP server, and a HTTP client, and all that sort of thing, unit testing, it’s going to be all part of the standard library. That is what made Python successful at the start, this idea of batteries included. A more modern example of this is Rust, where Rust is this beautiful systems programming language that people love, partially because it’s very hard, but partially because the tooling is really good. If you were writing systems tools in C and C++, then you had all these separate tools that you had to write glue code between. You had to write makefiles yourself, or you had to use automake or autoconf. There were complex tools around doing this thing. You had a compiler, and then you had a separate build system. Then there was no package manager at all, and there still is no package manager. Cross compiling is just impossible.
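The batteries-included idea is easy to show concretely. The sketch below uses only Python's standard library, with nothing to install: an XML parser, a unit-testing framework, and (imported but not exercised here) an HTTP server. The document and test case are invented for illustration.

```python
# Everything here is Python standard library: no packages to install.
import unittest
import xml.etree.ElementTree as ET
from http.server import HTTPServer, BaseHTTPRequestHandler  # also stdlib

def parse_users(xml_text):
    """Pull user names out of a tiny XML document."""
    root = ET.fromstring(xml_text)
    return [user.get("name") for user in root.findall("user")]

class ParseUsersTest(unittest.TestCase):
    def test_parse(self):
        doc = '<users><user name="ada"/><user name="bob"/></users>'
        self.assertEqual(parse_users(doc), ["ada", "bob"])
```

In C, each of those capabilities would mean finding, vendoring, and gluing in a separate library; here they are all part of the language distribution.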
If you look at what Rust did, they have this tool cargo, and cargo does everything. It drew a big box around all the things that you want to do as part of building your systems programming application. It said, we’re going to do all of them. It made them really good. It removed the seams between all of them, so you don’t have to write a makefile anymore. It also made it so that the package manager was possible, and there are all these crates in the ecosystem, which is possible because we’ve all agreed on the one thing. It also made things like cross compiling possible. That used to be something that was just like, absolutely not, something that you just could not do. Now it’s something that’s like, it’s a couple of flags, you might have to read a doc for it. It became very easy because they were building these holistic tools. That’s what I’m suggesting we should be doing. We should be building holistic tools. We should draw larger boundaries around the problems that we’re solving, not just do one thing and do it well. Instead, build larger tools that solve bigger problems, and that are able to remove the seams and the connections between those problems. Instead of building cloud applications in a script that we then combine with an interpreter to run in a process to run in an operating system that we put in a container that we run in an orchestration framework that we then run in a VM, that runs in a hypervisor, we should be raising the level of abstraction. We should be developing above the cloud.
I make Darklang. I’m Paul Biggar. I’m the CEO/CTO of Darklang, because we’re a little tiny company and you can be both. Before this I made CircleCI. Before that I did a PhD in compilers and static analysis. This idea of, how do we get code into production? How do we build tools for developers? This is what I’ve been doing my whole career. I’ve been working on Darklang for about six years now. It is ok. It solved some things. It’s not quite good yet, but we’re still working on it. We’ll get there.
The fundamental realization of Darklang and of this idea of developing above the cloud, is that what you do in programming languages, what you do when you’re building an application is that you receive, process, store, and query, and send data. This is just a transcription of what is involved in computation. This is the actual essential complexity of our application. If you’re building something that you’re putting in a cloud, you might be building containers, and writing Terraform, but fundamentally, what the application is doing is it’s just doing these things: receiving and sending data to the users. The way that an application developer wants to think about programs is that the user presses send message and the message delivers. Maybe they have to deal with error handling. Obviously, there’s going to be some UI to decide which user are we sending this to, and what message. This is the level of abstraction that we should be thinking about, instead of the cloud native computing landscape.
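The essential shape being described can be sketched in a few lines. This is a hedged illustration, not Darklang code: the field names and the in-memory list standing in for a datastore are invented, but each line maps to one of the five verbs.

```python
# The essential complexity of a cloud app: receive, process, store,
# query, and send data. Everything else is infrastructure.
messages = []  # stand-in for a datastore

def handle_send_message(request):                               # receive
    text = request["text"].strip()                              # process
    messages.append({"to": request["to"], "text": text})        # store
    inbox = [m for m in messages if m["to"] == request["to"]]   # query
    return {"delivered": True, "inbox_size": len(inbox)}        # send
```

The claim in the talk is that everything a container, Terraform file, or orchestrator adds on top of those five lines is accidental, not essential, complexity.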
Infra, Deployment, and Tooling Complexity
The types of complexity that I want to remove are broadly categorized in these three areas: infra complexity, deployment complexity, and tooling complexity. Infra complexity is all the things we’ve been talking about so far, Kubernetes provisioning, cold starts, containers, artifact registry, this entire thing. What’s really interesting about this: I fetched it from the internet to put it in this slide, and I had last fetched it in 2019. In 2019, when I put it into my deck, if I squinted, I could see what each of the boxes said. There has been so much growth in the complexity of this thing that I can barely even make out the icons anymore. That’s the infra complexity. The deployment complexity that I’m talking about is the whole process of getting code from the developer’s computer into production, running on a cloud server somewhere that your users can interact with. The key aspect of this is exemplified by this quote, “Speed of developer iteration is the single most important factor in how quickly a technology company can move.” If you’ve worked at a company that has a 1-hour deployment process, you will have been frustrated if you previously came from somewhere with a 10-minute or a 5-minute deployment process. Of course, there are people still working with 2-week or 2-month absurdities, but let’s ignore them for a moment. The deployment process is roughly something like this. You make the change. It runs through a CI pipeline. Then of course, eventually it gets deployed onto Kubernetes, and then it gets enabled with a feature flag. Then, at the end, you remove the feature flag, and that change runs the whole way through again. This deployment complexity is something that we’re looking to remove as well. Then, of course, developer environments and tooling.
If you’ve built devcontainers, or you’ve tried to understand the difference between the Postgres that you have in production and the Postgres that you have running on your machine, or tried to synchronize a bunch of developers on the same tools, there’s an awful lot of complexity there.
I’m going to give you a quick demo of what our solution is. This is the Darklang editor, and this is us creating a new HTTP handler. We put in the method, we write our path, and then we write the code. The code, we’re just going to write Hello World. That’s literally all it takes to get working code in production. If you go to that URL right now, it’s paul-qcon.builtwithdark.com/hello. That is a working thing, that got set up in 1 second. This is the level of complexity that people want to deal with. It’s still a low level of complexity; you could be doing higher-level modeling, with automatic serialization and communication, and that sort of thing. From a cloud perspective, there are no servers to create; there were no servers to even think about. It’s not even like, we have a container and a HTTP server, and it starts up and it waits for the health checks to pass and then it runs. There’s not even the concept of a server. There’s just the concept of a HTTP route, which is the essential complexity of how we build cloud-based applications.
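For contrast, here is a hedged sketch of roughly the same hello-world route using only Python's stdlib HTTP server. Even this minimal version forces you to think about a server object, a port, headers, and a request loop, none of which exist as concepts in the Dark version of the demo.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A rough stdlib equivalent of the demo's handler: a method, a path,
# and a body, plus all the server machinery Dark hides.
class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/hello":
            body = b"Hello World"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep request logging quiet for the demo

def serve(port=8000):
    # Calling serve() blocks and handles requests until interrupted.
    HTTPServer(("", port), HelloHandler).serve_forever()
```

And this is still the easy part: it says nothing about packaging the process into a container, health checks, or getting it onto a public URL.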
I want to take this a little bit further. We’re going to do the same thing, and we’re going to store something in a database. We’re going to store the visits that we get in a database. This is editing live in production. If you were to hit this right now, you would probably find something that was not quite fully working as we made the change. What’s interesting is that all parts of these are live and edited in production. When we click date.now, that runs, and the result goes into this trace that is being built up. In order to save things in a database, we create the database right there. We’re not provisioning databases and setting up Postgres, we’re just writing its schema. Then the schema forms a variable that we’re going to use in our program. I’m using these sorts of words in a weird way, because, usually, that’s not how we think about databases. We think about databases as things that get provisioned and connected to, as opposed to things that are in a program. You can see, in the 30 seconds that this thing has run, we have added data from our request, the request that was made several minutes ago, and we were able to build our computation on top of that. We stored it in our database. The actual executed values of our program are shown along the right. Any time that you’re writing something, you can see the actual runtime value of that, based on the trace that came when the user was using your application. I’m going to talk a little bit more about this. The general idea here is you can bring all of these things together and have this really fast, really simple deployment.
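The "datastore as a variable" idea can be sketched as follows. This is a hypothetical illustration, not Dark's implementation: the `Datastore` class and its fields are invented, but the shape is the one described, where "provisioning" the database is just writing its schema, and the store is then used like any other variable in the program.

```python
import datetime

class Datastore:
    """A store whose schema is declared where it is created."""
    def __init__(self, **schema):
        self.schema = schema
        self.records = []

    def add(self, record):
        # Records must match the declared schema's field names.
        assert set(record) == set(self.schema), "record must match the schema"
        self.records.append(record)

# "Provisioning" is just this line: a schema, bound to a variable.
visits = Datastore(path=str, time=datetime.datetime)

def handle_visit(path):
    visits.add({"path": path, "time": datetime.datetime.now()})
    return len(visits.records)
```

There is no connection string, no credentials, and no separately provisioned server anywhere in the program's mental model.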
How is this possible? How does it address the complexity? The main thing is this phrase that we use, invisible infrastructure. You don’t have to think about anything that’s involved in the cloud at all. There are no containers. There are none of the things that I listed. The infrastructure is immediate. There’s zero provisioning time for any part of the infrastructure. There’s no configuring. There’s no infra as code. It’s all part of this live system. That’s what we feel it takes to be able to do this thing. Application creation is reduced to just the abstractions that we actually think about: there are queues, there are crons, there are datastores. These are the things that we want to use in our application. These are the things that you’re actually given to use, with no access to the underlying stuff, but also no need to think about the underlying stuff.
Jess Frazelle coined the term deployless about our stuff. Deployless is a nice word to describe how we think about this, because every change we made happened instantly. It went live into production straightaway. I haven’t talked about what it took to make that safe. We have a big blog post about that. I’m not really going to focus on that here. What I’m going to focus on is this CI/CD pipeline that you see here, which has 34 steps, and 2 separate loops in it as well, for rerunning various commits and for rerunning the whole thing, and which was reduced to just this: the essential complexity being, make the actual change behind a feature flag. Then the essential complexity of the change is that you actually need to enable it for your users. You need to decide how to enable it for your users, for 10%, or for people who live in Nova Scotia, or whatever: the feature-flagging essential complexity of deciding how to put this out into the world. It’s interesting to reduce the deployment complexity down to this, because it makes you think about, what about all that other stuff the CI/CD pipeline does? What is it about bringing stuff into production that requires all of this? The answer is that it’s risk. Because back in the day, you used to merge stuff into production by putting it on the server, and now it’s in production. Users could use it as soon as it was installed on the server and the server started up. That actually turned out to be not so good, because that was risky. We don’t know how it’s going to work. We don’t know how it’s going to behave. We have all had downtime that has been caused by this. We moved the actual enabling of the thing to feature flags. Then, what’s left to do when we put this on the server? The answer is just risk. The risk is still there, but the value that came with that risk has gone, so it’s just expensive risk.
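The remaining essential complexity, deciding who a flagged change is enabled for, can be sketched in a few lines. This is a hedged illustration: the flag name, user fields, and rollout rules below are invented, but the stable-bucketing trick (hashing the flag plus the user id so each user gets a sticky percentage bucket) is a common way to implement the "10%, or people in Nova Scotia" decision.

```python
import hashlib

def flag_enabled(flag, user, percent=0, regions=()):
    """Decide whether a feature flag is on for this user."""
    if user.get("region") in regions:
        return True
    # Hash flag + user id so each user lands in a stable bucket 0-99,
    # giving a sticky percentage rollout.
    digest = hashlib.sha256(f"{flag}:{user['id']}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

def send_message(user, text):
    if flag_enabled("new-send-path", user, percent=10, regions=("nova-scotia",)):
        return ("new", text)  # the change, behind the flag
    return ("old", text)      # existing behavior
```

The claim in the talk is that this decision is the whole job; everything else the 34-step pipeline did was managing risk that the flag already absorbs.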
What you get from this, obviously, is that changes are instant, and there are no containers, there’s no CI/CD. There are also no branches, and that’s interesting. All of the flags, all of the if statements that we have throughout our code: Git branches are a form of if statement. It’s a place that you can do your stuff. Our environments, staging environment, developer environment, production environment, those are all if statements, things that we want to do off to the side. The same with blue-green deployments: I want to bring this over here to test that it works. What we’ve also done is we’ve simplified all of these concepts into feature flags, because that’s what they are: blue-green deployment is the feature flag of the server-deployment world. Git branches are the feature flags of the version-control world. When we have this holistic system, where we collapse all the concepts down to one, we can also remove all these different concepts that were separately, independently created in all these different tools, into one single concept. Finally, the developer environment: you can see here that there’s no Git in it, and there’s no installing cloud emulators. It just simplifies it. Because you’re working in the editor, there’s nothing to set up. If you’re switching teams, or you want to go look in some code base that you haven’t dealt with, you don’t have hours or days of setup, it’s just there.
The interesting thing about removing all these things is that it’s not just that we get to simplify the CI/CD pipeline, or that we get to remove all of this cloudy stuff, we also get to remove things that we don’t even really think about as part of our system. Shipping binaries around isn’t something that we necessarily think of when we build applications, but at some point, our code gets compiled into some binary or artifact, and that needs to get shipped to some container registry. When it runs, there’s a file system, but the file system is a weird one, because the file system doesn’t really exist as far as our users are concerned. We’re running Unix file systems. If we’re running them on Kubernetes, we have them read-only because, again, this is a thing that only provides risk and doesn’t actually bring something to the table. Essentially, it’s like a bucket of files that are used by our HTTP server. What is the actual file system doing? Also, what is the operating system doing? Why are we thinking about our operating system when the thing that’s exposed to users is a HTTP server? By bringing all this together and building a runtime that’s developing above the cloud, we don’t have file systems, we don’t have operating systems and block storage, and all these things that are part of that.
Easy Wins at the Boundaries
I talked earlier about Rust, and about how it was able to remove some of the seams of these systems, so like a makefile being a seam. That’s one benefit of having these holistic systems, the seams between things are removed, the deployment is removed, the compile step, the cloud is removed. What you also get is when you control two tools, and you merge them into one holistic tool, as well as removing the seam, you get to innovate within that space. When you do three tools, which is what we’re doing, editor, cloud, and programming language, there’s an awful lot of innovation that you can do that is really low hanging fruit that no one has looked at before, because just no one has been able to do that. The first one that I love, and this is the best thing about Dark, everyone who uses Dark loves this thing. Whenever I use Dark, I love it. Whenever I don’t use Dark because our system is written in F#, and that doesn’t have it, I spend a lot of time in VS Code and all this sort of thing. I don’t have this. Trace-driven development is this idea that when requests are made, we store the trace, so when the user makes a request to our system, we automatically store the trace. We store the results of any impure functions, such as HTTP requests, and database calls and that sort of thing. We store them in a trace that can be replayed back later in the editor.
You can see an example of it here, where, in the editor, we highlight the code that was active in this part of the trace. We show the result. We show the value of the variable that our cursor is on, on the left-hand side. You can see this is a live debugging system, when you compare it to the existing ways of doing debugging. There are two schools of thought around debugging. One is the printf-style debugging you see in Go. You add printfs. You make the compile loop fast enough that you can print them. Then you grep through the logs of the printing to try and figure out what happened in your program. That has some flaws. There’s the other school, which is, we’re going to run it again in a debugger. Maybe this is the Visual Studio view of the world. You have a powerful debugger where you can step through your program line by line, and you can see what the variables are at this particular point. You can step forward, and in some cases, step back, and see what the values were. That’s also very good, and both these approaches have advantages and disadvantages compared to each other, and inexplicably have zealots who really feel strongly that one or the other is the right way.
What we have with Dark is the best of all things. You don’t need to start it again. You don’t need to create a test case that reproduces it. That’s automatically traced for us. We don’t need to add printfs to see the results. We don’t need to step through the thing, or step too far or back, we just put our cursor on the variable where we want to see the value. Part of my job is looking at customer code on this thing and figuring out why something goes wrong. It is trivial. You just look through a couple of things, you put your cursor on a couple of things, you see real values that are actually flowing through the system in production, not mock values, not test values. It’s very easy to see where the problems come. The interesting thing about this is just how easy it was to create. Once you have this holistic environment, once you have the editor, the programming language, and the infrastructure all together, this is the most obvious thing. Any one of you who was given these constraints would immediately come up with this as an obvious debugging tool. Then when you compare it to the existing state of the art, it’s 10 times better. That’s only because not just that we’ve removed the seams, but we’ve built this holistic tool.
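The mechanism behind trace-driven development can be sketched with a small decorator. This is a hypothetical illustration, not Dark's implementation: when a real request runs, the results of impure calls are recorded into a trace; later, the same code can be replayed against that trace without touching the outside world.

```python
import functools

trace = {}        # recorded results of impure calls, keyed by call site
replaying = False  # flipped to True when replaying in the "editor"

def impure(fn):
    """Record this function's results live; replay them later."""
    @functools.wraps(fn)
    def wrapper(*args):
        key = (fn.__name__, args)
        if replaying:
            return trace[key]   # replay the recorded result
        trace[key] = fn(*args)  # live: run it and record the result
        return trace[key]
    return wrapper

@impure
def http_get(url):
    # Stand-in for a real network call.
    return f"live response from {url}"
```

With something like this in place, "put your cursor on the variable" debugging falls out almost for free: every value in the trace is a real production value, not a mock.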
AI and Darklang
The other thing I want to talk about here is AI. We are, like everybody else, pivoting into AI. One of the things that’s interesting about this generative AI world is that it relies an awful lot on context. You can do some interesting things in ChatGPT, but you get a really good result with Copilot because Copilot is integrated into your editor. There was a Stack Overflow survey on this recently; developers’ opinions of AI are extremely high. Fifty percent of people are using these tools, and 80% of them have high opinions of generative AI. Only ten percent think it’s reliable, so there are problems, but it’s still loved. You can see, if you have this holistic system, the more stuff that you bring in, the more context that you have, which I suspect, in my VC pitch, is going to make us have much better AI results. We’re a little early on that, so I can’t actually report back how it’s going to be. I feel strongly that this idea of holistic systems, and raising the level of abstraction, brings us tools that are going to give us better AI results.
That’s my pitch. My view on things is like, complexity is just out of control. Our systems are way too hard to build. Programming isn’t really fun anymore. Programming isn’t fun because we spend all of our time writing glue code. If we instead raise the level of abstraction, remove some of this complexity, we’re going to have a lot more fun if we start developing above the cloud.
Questions and Answers
Participant 1: There’s a reason why things end up being complex: you want to adapt to your specific use case and architecture, and whatever. I’ve been curious what the use case of Darklang is, at what scale it is useful, and at what point it reaches its limitations.
Biggar: Our feeling is that that’s roughly right. I wouldn’t use something like Darklang for an extremely high traffic server that needs to do hundreds of thousands or millions of requests a minute. We tell people, don’t use this for more than 10 requests a second, which I don’t think is the right number, but it sets the expectation appropriately. Our hope, though, and the plan of the whole thing, is to get good at this and to start being able to think of Dark less as a language with a nice abstraction and more as an infrastructure compiler, where the application that we’re building for you is a thing that we compile and run in the way that human engineers would have built it, in faster languages and specific technologies and that sort of thing. Theoretically, there isn’t a hard limit on this. In practice, I wouldn’t build specialized things just yet. We’re telling people, prototype on this, because during the time that you spend working out the kinks, you really want that low iteration time. Then at the end, if it’s not fast enough for you, then at least you know what you’re writing, and you can rewrite it in a day or two in Go, or Rust, or whatever you want to do.
Participant 2: What does automated testing look like in this sort of environment?
Biggar: Automated testing is interesting. Firstly, we haven’t built it yet. What it looks like is that everything is immutable and most functions are pure. Dark is a statically typed functional language. There are a couple of nice properties there for testing. The first of them is that you don’t need to test everything. The functions that have not changed do not need to be tested, and because every change is processed in the system, we can very easily figure out the right tests that need to be rerun for each change. The second part of it is that because it’s a statically typed language, an ML-based language like Haskell or OCaml, we have the ability to create fuzz tests. We have the ability to create tests for you based on the type signatures. We think that we can reduce the complexity of testing by the same significant amount. Obviously, AI is another thing that we can throw in there. Basically, we would like it to be that you can get to 90-plus percent coverage by using the traces, the inputs from your actual users, as well as the fuzzing and automated tooling, so that testing is just a really tight, simple loop.
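Deriving fuzz tests from type signatures can be sketched briefly. This is a hedged illustration in the spirit of what's described, not Dark's mechanism; real property-based tools like Hypothesis do this far more thoroughly. Only `int` and `str` inputs are handled, and the example function is invented.

```python
import random
import typing

def gen(t, rng):
    """Generate a random value of the given type (int and str only)."""
    if t is int:
        return rng.randint(-1000, 1000)
    if t is str:
        return "".join(rng.choice("abc ") for _ in range(rng.randint(0, 8)))
    raise NotImplementedError(t)

def fuzz(fn, runs=100, seed=0):
    """Call fn repeatedly with inputs derived from its type hints.
    The property checked is simply that it never raises."""
    rng = random.Random(seed)
    hints = typing.get_type_hints(fn)
    params = [t for name, t in hints.items() if name != "return"]
    for _ in range(runs):
        fn(*(gen(t, rng) for t in params))

def shout(s: str) -> str:
    return s.upper() + "!"
```

Running `fuzz(shout)` exercises the function with a hundred generated strings; combining that with recorded production traces as inputs is the loop being described.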
Participant 3: Can you talk a little bit about what kind of adoption you’ve seen so far, what kind of customers are using this?
Biggar: We have very low adoption. We are still pre-product market fit. A couple of months ago, we said, we’re moving far too slowly on this. The reason that we were moving slowly is because we were refusing to make big changes without slowly bringing our customers along and migrating customers, which was part of our value prop. What we’re actually doing right now, since we’re doing this shift to AI is we also said, ok, we’re going to leave everyone running on this old system, and we’re just going to build a new system. Our plan is to kickstart things again with that, in particular, with an AI based system. We’re in the hundreds of active users. People have been running stuff on Dark for a couple of years. It’s not in the take over the world part of the startup phase yet.
Participant 4: One thing about systems and frameworks: as you build complexity, you usually end up needing to understand that complexity to debug, or to get it to work when it’s not working, and maybe you hit those things more often than what’s presented in the main abstraction. I was just wondering how you think about control and extensibility there, if that’s part of what you want to offer. Also, how do you think about the all or nothing proposition of this, and the necessity to do something outside of your project that forces you to take on all the complexity you’re removing, because of [inaudible 00:38:47].
Biggar: What happens when you need to go under the hood? Dark is source available. It will soon be open source, but we haven’t done that dance yet. You can look under the hood, if you like, and see what’s going on in the system. Fundamentally, people don’t really need to do that, because of the language that we designed around it. It’s an immutable, statically typed functional programming language. There is not really the same level of stuff that happens under the hood as you get with object-oriented languages, or languages where a lot of the ecosystem is written in C, like Node and Python. That’s one part of it.
Your second question is this all or nothing approach. The all or nothing approach is obviously the big flaw in Dark. This is true of all holistic systems. The reason that we build the industry the way that we do is because it’s easy to slot a tool into the ecosystem around it. It’s much harder to say, we’re going to replace this whole thing. Our answer to that is, one, HTTP is our foreign function interface. If you want to build stuff in Dark that still talks to the old stuff, everyone’s building microservices and using the right tool, so just build your new thing in Dark and speak HTTP to your existing monolith, or your existing microservices. Our goal is that people will start to build more things in Dark, and eventually treat the old stuff as legacy, where it’s slow to write things in, let’s say, Go or Rails, and it’s fast to write things in Dark. Obviously, this is untested, still a dream, and a lot of people are naturally apprehensive about the all or nothing approach. One thing I’ll add about that is, I’m not quite an AI zealot, but I do think that the ability to take your code in one language and have it automatically rewritten for you in another language is actually helpful here, because if you write it in Dark, and eventually say, no, this isn’t working for me, you can get maybe 60% of the way there with some AI trickery.