A big loss for the Emacs community! emacs-aio is great!
I see the author is spring cleaning:
> I've turned over a new leaf (no more Openbox, Tridactyl, Xorg, xterm), and so some of these things I no longer use. On Linux I now use KDE on Wayland with a minimally-configured browser. I miss the power user features, but I do not miss the friction and constant maintenance.
LLMs have inspired a similar change in me: with a big change in how I work, I feel I can and should be more flexible with adopting new tech, which involves freeing myself of previous choices.
FWIW, the age of LLMs made me build a deeper, more intimate relationship with Emacs, because it's a Lisp REPL loop with a built-in editor, not the other way around. When you give an LLM a closed loop system where it can evaluate code in a live REPL and observe the results, it stops guessing and starts reasoning empirically.
The LLM I run inside Emacs can fully control the active Emacs instance. I can make it change virtually any aspect of it. To load-test things, I even made it play Tetris in Emacs - and not just run it, but actually play it without losing. It was insane.
Also, Emacs is all about plain text - you can easily extract text from anything - from the browser, terminal, CLI apps, Slack, Jira, etc., and you can do that on your own terms - context can appear in a buffer, in your clipboard, become a file or series of API requests. That is really hard to beat.
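The closed loop described above - evaluate, observe, correct - can be sketched in a few lines. This is a hypothetical illustration of the pattern, using Python's own `eval` as a stand-in for an Emacs Lisp REPL, not how any particular Emacs package implements it:

```python
# Minimal sketch of a REPL-backed tool an agent could call: it evaluates
# an expression and returns either the value or the error text, so the
# model observes real results instead of guessing.
import traceback

def eval_tool(expression: str, env: dict) -> str:
    """Evaluate `expression` in `env`; return the result, or the error."""
    try:
        return repr(eval(expression, env))
    except Exception:
        return "ERROR: " + traceback.format_exc(limit=1)

session = {"xs": [3, 1, 2]}
print(eval_tool("sorted(xs)", session))  # agent sees the real value
print(eval_tool("sorted(ys)", session))  # ...or the real error, and can self-correct
```

The point is the feedback: every tool call returns ground truth from a live environment, which is exactly what an Elisp `eval` tool inside Emacs provides.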
Same here. Emacs has been the stable editor through all kinds of language changes, tool changes, and IDE changes. Emacs is great with LLMs, as LLM work is mostly text-related and Emacs is great at capturing and dealing with text.
Absolutely. It doesn't have to be an either-or. I use gptel and org-mode when I want to be really hands-on driving the development. It's a very different mode of interacting with models, and the way newer models are trained to play nice with harnesses makes them very obedient.
I've tried different AI packages and currently gptel and ECA remain the main ingredients. This is a quickly changing landscape, and things may change, but for now it feels very good.
I like gptel because it's enormously extendable and exploitable - it allows me to send LLM requests from just about anywhere - I could be typing a message (like this very one) and suddenly in need of ideas for how to phrase something better, or explain simply, or fact-check my assumptions, whatever. Quick & dirty interaction that gets discarded in the same buffer. For longer investigations and research I would use a dedicated gptel buffer. Those get automatically saved.
I don't use gptel as a coding assistant - even though you can do that, it's not really optimized for that kind of work. I use ECA. It works much better for me than every other alternative I tried, and I tried more than a few. What's crazy is that I sometimes type a prompt in ECA, then ask gptel (with a different model) to make it more "AI-friendly", changing the prompt in place, and then send it.
All my MCPs are coded in Clojure (mostly babashka)¹ - because (like I said) giving an AI a Lisp REPL makes much more sense (maybe even more than using a statically typed language). I had to employ a few tricks so all the tools, skills and instructions can be shared between gptel, eca-emacs, ECA Desktop, Claude Code CLI, Claude Desktop App, and Copilot CLI. Even though I mostly use gptel and ECA, it's good to keep other options around, just in case.
All the AI-related Emacs settings are in my config².
Is this helpful, or do you want some more concrete examples?
I am really loving working on a fun Elisp project with pi, a minimal and very extensible agent. I have the agent use emacsclient to control my session, showing me code, running magit ediff for me, testing, formatting, reloading -- it's all working great.
I'm still exploring all the ways the agent and I can collaborate using Emacs as a shared medium, but at the moment am super optimistic about it.
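For anyone curious what "the agent controls my session via emacsclient" looks like mechanically, here is a hedged sketch. The wrapper name `emacs_eval` and the `dry_run` flag are my own; `emacsclient --eval` is the real mechanism, and it assumes a running Emacs with `(server-start)`:

```python
import subprocess

def emacs_eval(expr: str, dry_run: bool = False):
    """Send an Elisp expression to a running Emacs via emacsclient.
    With dry_run=True, just return the command that would be run."""
    cmd = ["emacsclient", "--eval", expr]
    if dry_run:
        return cmd
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

# An agent might issue calls such as:
#   emacs_eval('(magit-status)')        # open magit for the current project
#   emacs_eval('(buffer-name)')         # inspect what the user is looking at
print(emacs_eval("(+ 1 2)", dry_run=True))
```

Anything Emacs can do becomes one string-valued tool call away, which is why a single generic "eval" tool goes so far.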
Big same. I have been doing a lot of clojure development, and hooking up my app to a live REPL has given me an absolutely fantastic feedback loop for the LLM. I don't think a lot of people understand what they're missing.
> I don't think a lot of people understand what they're missing
Very true. There's an enormous tacit knowledge gap. Check this out:
I have to use Mac for work. My WM is Yabai, which is controlled via Hammerspoon (great tool on its own), which means I can use Fennel, which means I can have a Lisp REPL. MCP connected to that REPL can query and inspect every single window I have on my screen. It can move them around, it can resize them, it can extract some properties of them. It's figuring out stuff like: "pick a selected Slack thread from the app and send it into an Emacs buffer", or "make my app windows work like Emacs buffers" - pick from the list and swap it in place. Or "find the HN thread about retiring from Emacs among my browser tabs and summarize the content"...
Never in my life have I been more grateful to my younger self for grokking the philosophy of Lisp. Recent months have only reinforced my firm belief that this 70-year-old tech is truly everlasting. Thank you, John McCarthy, for the great gift to humanity, even though so weirdly underappreciated.
So? My terminal has the same full system access. If I didn't use Emacs, I'd be using Claude Code in it. It's contained locally on my computer; I don't see any problem here. I use Emacs as my OS layer. Why would I complain that my OS has access to something? It would be weird and annoying if it were the opposite.
cool to see you in the wild. for me, it does work out of the box; however, some sites will break or have navigation that's too complex, especially with iframes, and I'll have to swap to a mouse, which is a bummer. I understand that's an inherent limitation of the tech, since the web today is not built for it.
Does anyone else not understand what people mean when they refer to the "friction" supposedly inherent to these power user tools? Almost none of the configs/scripts/etc I use for my heavily-customized and terminal-heavy setup get changed for years at a time.
If you frequently have to use other computers, a heavily customized setup has much more friction: either to set up each machine like you want, or to remember how to do things without all the customization (if you can't customize, or it isn't worth the time).
When I graduated college I used Dvorak and Emacs on Linux. Six months of having to use shared Windows lab computers extensively beat me down to surrender all of those points - my brain just couldn't handle switching, so I conformed my desktop to match. Then later I switched jobs to a group that was all Unix, but of many varieties most of which only had vi, not Emacs. And so I learned vi. Sometimes minimizing friction means going with the flow.
Our lives are much more than our computing environments. By surrendering a bit of control of our computing environments we free up our brains to devote to other things in life: loved ones, pets, gardening, home maintenance, other hobbies and sports...
Millions of happy Apple users can't be wrong on this.
Maybe, but for some of us, the peace of mind comes from stability and minimal friction with our tools.
Whenever I touch my config, it's because I got frustrated with one operation and try to see if it can be done faster. If you use your computer like a toaster, then you wouldn't care that much about power usage. But for me it's a creative lab, and I don't want a generic cubicle.
If the author is around, I'm curious why he chose wxWidgets instead of Qt; I'd be surprised if it is that much lighter weight than Qt. (I even wrote my own cross-platform toolkit with "more lightweight" as one of the reasons, and if you use all the features, it weighs in about the same size as Qt, I think.) Also, the last time I used wxWidgets, many years ago, it had a clunky MFC style to it, limited features, and a rather Windowsy look and feel. Have those things changed?
"The" future of software engineering is a silly thing to predict. I might predict one substantial change is that we get our house a little more in order about universities and the private sector distinguishing between computer science, software engineering, and software development. Obviously they are not cleanly separated[1], but LLMs will affect each subfield very differently.
- The impact on computer science seems almost entirely negative so far: mostly the burden of academic wordslop, though an additional negative impact is AI sucking all the air out of the room. What's worse is how little interesting computer science has come out of the biggest technological development with computers in many years: in fact there has been a terrible and very sudden regression of scientific methodology and integrity, people rationalizing unscientific thinking and unprofessional behavior by pointing to economic success. I think it'll take decades to undo the damage, it's ideological.
- The impact on software development actually does seem a bit positive. I am not really a software developer at all. It always felt too frustrating :) However the easing of frustration might be offset by widespread devastation of new FOSS projects. I don't want to put my code online, even though I'm not monetizing it. I'm certainly not alone. That makes me really sad. But I watched ChatGPT copy-paste about 200 lines of F# straight from my own GitHub, without attribution. I'm not letting OpenAI steal my code again.
- Software engineering... it does not seem like any of these systems are actually capable of real software engineering, but we are also being adversely affected by an epidemic of unscientific thinking. Speaking of: I would like to see Mythos autonomously attempt a task as complex and serious as a C compiler. Opus 4.6 totally failed (even if popular coverage didn't portray it as such):
> The resulting compiler has nearly reached the limits of Opus's abilities. I tried (hard!) to fix several of the above limitations but wasn't fully successful. New features and bugfixes frequently broke existing functionality.
"Future of software engineering" folks should keep stuff like this in mind. What model is going to undo Mythos's mess? What if that mess is your company's product? Hope you know some very patient humans!
[1] They should have different educational tracks. There is no reason why a big fancy school like MIT can't have computer scientists do something like SICP and software engineers do the applied Python class. Forcing every computer professional into "computer science" is just silly; half the students gripe about how useless this theory is, the other half gripe about how grubby the practice is. What really sucks here is that I think Big Tech would support the idea, we're just stuck in a weird social rut.
Toss them, because the level of damage they have done is astounding. Tons of companies are still fixing the losses from vibe coding.
What we need is better code analyzers, lexers, and the like. And LLMs are practically the opposite, because they can never, ever give a concise answer by design. Worse, they rot over time.
> Tons of companies are still fixing the losses from vibe coding.
Well, you have to separate "future of" from "ensuing damage". This is similar to the fishing industry. Fishermen in the past used spears, rods, small nets, nowadays annual national catch statistics are reported in kilotonnes. They are destroying the ocean floor, causing massive extinction of species, causing irreversible damage. Yet, you can't argue looking 100-150 years back that industrial fishing was not "the future of the fishing industry". That is also why programmers won't ever disappear because of AI progress. Just like we still need fishermen, we'd need programmers. The sad truth about this is that soon we truly may have no need for fishermen, because there's no fish left in the ocean.
I used emacs full-time for many many years. Then I switched to Vim, or other editors with Vim modes, also for many years. I have to be honest, I don't see a particularly clear winner between them. Modal editing is a bit unusual in many ways. There are some things that it certainly makes easier, but I personally found that the overall process of editing and writing code in real time was, for me, more efficient in single-mode emacs.
> I don’t see a particularly clear winner between them
Because deep down they are categorically incomparable. Separate the tools from the foundational ideas and you see the very different value. The Vim model of text navigation is a fantastic, practical, brilliant idea. Once you grok it, you can take it anywhere - you can use it in your editor, browser, terminal, WM. Emacs is rooted in another, even more brilliant idea: a practical notation for lambda calculus. These ideas have no overlap. But understanding the philosophy of each (ideally both) opens up so many different possibilities.
My vim muscle memory has paid off more for me than my emacs muscle memory. Emacs was the better editor, though. Anything that doesn't have Vimscript is an automatic winner IMO.
The author is the developer of the RSS reader Elfeed, which a lot of Emacs users use several times a day. Though the article talks about a vibe-coded wxWidgets-based GUI application called Elfeed2 that he wrote as a replacement, Emacs aficionados would be loath to leave their Emacs environment and switch to that. Hopefully Emacs elfeed finds a new maintainer.
I tried Elfeed2 immediately after the announcement; well, it's nowhere near the experience of elfeed in Emacs. Elfeed2 doesn't load content for most of my feeds; elfeed does. I also integrated elfeed-tube, which shows previews of videos and their transcripts, making it a no-brainer to get a summary without watching the whole video.
Probably because it's closer to a reimplementation than anything else, and in Emacs you can use libraries with much less friction than in self-contained languages.
My usage of emacs is so vim-like that I’ve tried switching a few times. Vim is definitely faster, and overlays and cursor placement is much simpler and more intuitive. But there were still feature gaps and configuration issues that prevented full adoption.
I've been retired from emacs for several years now, but I'm still looking for a magit replacement that is independent of my editor. VS Code's magit extension is really good, but I split my time between IntelliJ and VS Code.
#!/bin/sh
# Launch a terminal-only Emacs preloaded with magit, but only when
# inside a git work tree (stderr silenced for the non-repo case).
if [ "$(git rev-parse --is-inside-work-tree 2>/dev/null)" = "true" ]; then
    exec emacs -nw -q --no-splash -l "/path/to/magit-init.el"
fi
It worked well for me because I can reuse all my keybindings (evil + leader keys with `general`) and my workflow is fully in the terminal. (I have since moved on to Jujutsu, and `jjui` is filling this gap for me right now, but it's not quite a magit-for-jj).
Honestly, magit is just a masterclass in UI design. It makes most everything incredibly easy to do while still giving you the ability to tweak things if you need to.
I was wondering how people feel about this trend. LLMs allow you to free yourself from foundations (frameworks, programmable programs) and just generate any support layer you want from old or new libs. This is all very understandable... yet I find it a loss: in the Lisp world, having a core model and semantics shared by all the upper layers means ease of reuse (for instance, people leverage Emacs calc classes in other places), while LLMs allow for easier fragmentation.
I also suspect it allows easier consolidation. Moving from a deprecated lib to a new (and better) one for example.
Implementations will likely homogenize a bit as well, but on the other hand boy am I glad not to see an increasing amount of bizarre naïve hand-rolled implementations for some things.
Why? What makes Spacemacs so different/special that it requires some kind of distinct opinion that would be extremely valuable? Spacemacs is the same old Emacs with some out-of-the-box customizations on top - there's nothing fundamentally different about it.
Spacemacs is not a "batteries-included" version of Emacs. You say that and people may get confused. It's not a "different version" of Emacs - it's not Emacs at all. It's an Emacs config you can configure - a meta config. It is more like a collection of recipes you can run on Emacs. That is an important distinction.
Hence my question: what could Wellons (a seasoned Emacs veteran) ever say about Spacemacs (or Doom - which in this context makes no difference)? What kind of views would one be interested to hear? Using the Space key as the leader key, or something about the local-leader key; or vim navigation/Evil in general; or the modules/layers architecture of the Emacs config? He said in that post you shared that he believed he'd eventually end up using Evil - he doesn't need to use Spacemacs for that.
Spacemacs is great for beginners - for people who don't want to deal with learning Emacs's native bindings; they are legit confusing. For someone like Chris, it makes little sense; they'd probably just add modal editing packages to their existing config. Even though Spacemacs and Doom are still valuable - one can find many interesting gems there.
Also, these projects may give you good discipline for structuring your keys mnemonically - everything file-related would be at "SPC f", search stuff at "SPC s", etc.
> With my newly-acquired superpowers I could knock out the last two pieces in a few days’ work
From the linked post:[0]
> I left an employer that is years behind adopting AI to one actively supporting and encouraging it. As of March, in my professional capacity I no longer write code myself. My current situation was unimaginable to me only a year ago. Like it or not, this is the future of software engineering. Turns out I like it, and having tasted the future I don’t want to go back to the old ways.
It's deeply distressing to watch people fall into AI psychosis. Being smart, accomplished, or experienced is no defence.
After the bubble pops and the industry realises the damage these tools can do to people, folks like the author will have to confront that they were taken in by a lie. Many won't be able to confront that.
this is just like being promoted from developer to manager. some people like it, some don't. with AI there is another dimension: some people like managing machines instead of people, some don't.
it's not for me. i don't want to stop writing code. i don't mind managing people, but i don't want to manage machines (at least not with as imprecise an interface/outcome as AI provides). consequently AI may be fine for this person, but it is not for me.
It's not AI psychosis, you're interpreting what he said to the extreme.
Anyone who has actual corporate team-lead or management experience understands AI as effectively a junior dev who doesn't have great persistent memory. These devs using AI are reviewing, guiding, and validating the work given to them by AI just as they would from a junior dev.
The inverse of your statement is more apt; it's distressing to see people so angsty about AI usage. There are going to be skilless vibecoders and then there are going to be experienced devs (like OP) who figured out their AI workflow to multiply their productivity 2-5x.
What the future holds for AI model pricing-- that is a valid concern. But I don't think that's what you intended.
> It's deeply distressing to watch people fall into AI psychosis.
It's unclear what you're saying here... Yes, AI-induced psychosis is a real problem and the frontier labs' mitigations are ineffective, to put it mildly. But using AI as a coding tool doesn't have anything to do with psychosis.
> Being smart, accomplished, or experienced is no defence.
Perhaps you're confusing "not using AI" with "not being dependent on AI", those are very different things.
The edge isn't from avoidance, it's from using AI as leverage on top of real skill. A strong developer + AI beats a strong developer alone, and massively beats a weak developer + AI. The edge doesn't come from avoiding a tool - it comes from being the kind of person who doesn't need it but uses it anyway. That's leverage. Refusing to use it is just leaving leverage on the table to make a philosophical point.
> After the bubble pops
People like Chris (who is an enormously capable engineer) would just move on to different tools, different techniques and paradigms. That is the essence of being a software developer - many of us choose this path specifically because it forces you to learn something new every single day. That is (I suspect) also another reason why Wellons decided to migrate away from Emacs - he has learned it so deeply that perhaps it's no longer giving him the satisfaction of learning. Which, to be honest, is hard to believe - Emacs is a boundless playground; there's always something new to learn there.
I just wonder how jobs like that won't replace their employees. Seems too good to last. In a few years OpenAI will just sell $1,000 per month Human-free Agent Coding for businesses.
Saying they have psychosis is a rude exaggeration.
AI psychosis is having a toxic relationship with a chatbot as though it were a real person. It has nothing to do with engineering. You're muddying your own point by conflating all LLM use with some kind of delusion. There is a lot of nuance in this space, and you're not doing yourself any favors by ignoring it if you're an engineer. There is no bubble pop, other than a straight-up apocalypse, that is going to put this genie back in the bottle. Models are trained. Tools are built. There isn't a single industry that cares about artistry more than efficiency. It's here to stay, it's getting better, and if you don't know how to use it, you're going to have trouble finding work.
Not writing code isn't the same as vibe-coding. You can stay on top of AI, make it rewrite the things that look bad, make it refactor until you're happy with how things look, etc....
Maybe a lot of people who are doing that aren't admitting that they've stopped writing code, but when all you're ever doing is manually fixing a few lines, moving blocks of code to more sensible places, fixing jumbled parameters in a call, and such, you're not really writing code anymore. You're now a chef in a kitchen, yelling at assistants and only touching things yourself when communicating a correction to one of those dimwits is more frustrating than just doing it.
You still have to be a cook to be a chef, though. But the reason I say that AI is dumb is because I tell it to do things, it does them in a dumb way, and I complain at it and tell it to write it in a sensible way. It screws that up, and I tell it to do it again, and not to screw it up. I'm still not coding. If it goes into a loop of nonsense, I touch things with the intention of doing just enough to knock it out of that loop (or rather keep the new context from falling into it.)
"After the bubble pops" we might see that a lot of new chefs can't actually afford assistants. But just as likely, the overbuilt (government-subsidized directly and through policy) capacity might end up getting written off, and at the cost of electricity and maintenance costs could stay reasonably good. Or algos improve. Or training methods improve.
No. AI is a must for software development. It's non-negotiable. The productivity gains are too great. The era of 100% human-written code is over. People will still do it as an idle curiosity, for personal projects only they intend to use. But even those open source projects with significant user bases that forbid the use of AI (like, afaik, NetBSD) will be eclipsed by those that support it in terms of features, capability, and security. And the commercial world? Forget it. You cannot keep pace with your employer's expectations unless you learn to use these tools well. This is not up for debate. It's reality.
Plenty of accomplished devs are getting good results and accomplishing tasks with unheard-of speed using AI, so if you're still not, that's a PEBKAC. You are not using the tools correctly. Figure it out before you complain.
> "No. AI is a must for software development. It's non-negotiable."
Absolutist rubbish.
> "But even those open source projects with significant user bases that forbid the use of AI [...] will be eclipsed by those that support it in terms of features, capability, and security."
As is this. Whether a language model is relevant to a project, open source or otherwise, is of course heavily dependent on its nature (ethics, use case, deployment, working environment/culture, et cetera).
LLMs may be a must for programming, but not for engineering. Writing code is the easy part once you figure out what actually needs to be built in the first place.
Indeed. But figuring out what actually needs to be built is the systems analyst's job, not the programmer's. It takes people skills and holistic thought, something programmers are generally poor at (and AI certainly is no good at, at least not today).
I know how to do things by hand, man. But the writing is on the wall: that skill is going the way of writing programs on punchcards. And there's little we can do about it because the economics in favor of LLMs are like laws of physics.
Yes, model collapse is gonna suck. But LLMs are not just left to self-train, they are guided by human researchers who are going to find ways to groom and direct the models to avoid collapse. They can make billions by shipping better models, so why wouldn't they invest a lot of effort in that?
You still don't get where I'm coming from. The AI takeover of programming is inevitable, and I hate it. But my feelings don't make the brutal economics go away. With proper use of these tools, a skilled developer can now accomplish in days what used to take weeks or months. Period. I know this because of the absurd number of skilled developers here, on X, Mastodon, and elsewhere, saying "with AI I'm accomplishing in days what used to take me weeks or months". And if you have the opportunity to make use of the tools, not doing so is either stupid or cutting off your nose to spite your face.
https://github.com/skeeto/dotfiles/commit/df275005769b654618...
> I am no longer using Mutt nor running my own mail server. In general less terminal stuff for me.
https://github.com/skeeto/dotfiles/commit/e331e367c75f66aaa9...
My .emacs config has improved, and I wrote my own Emacs-based coding agent: https://github.com/mark-watson/coding-agent
https://poyo.co/note/20260202T150723/
—
¹ https://github.com/agzam/death-contraptions
² https://github.com/agzam/.doom.d/tree/main/modules/custom/ai
This is what gives me the most pause.
For me the friction always comes when I try to use the internet without it
solid extension, big fan
this exactly. most people can’t set it up that well.
For you, perhaps.
- The impact on computer science seems almost entirely negative so far: mostly the burden of academic wordslop, though an additional negative impact is AI sucking all the air out of the room. What's worse is how little interesting computer science has come out of the biggest technological development with computers in many years: in fact there has been a terrible and very sudden regression of scientific methodology and integrity, people rationalizing unscientific thinking and unprofessional behavior by pointing to economic success. I think it'll take decades to undo the damage, it's ideological.
- The impact on software development actually does seem a bit positive. I am not really a software developer at all. It always felt too frustrating :) However, the easing of frustration might be offset by widespread devastation of new FOSS projects. I don't want to put my code online, even though I'm not monetizing it. I'm certainly not alone. That makes me really sad. But I watched ChatGPT copy-paste about 200 lines of F# straight from my own GitHub, without attribution. I'm not letting OpenAI steal my code again.
- Software engineering... it does not seem like any of these systems are actually capable of real software engineering, but we are also being adversely affected by an epidemic of unscientific thinking. Speaking of: I would like to see Mythos autonomously attempt a task as complex and serious as a C compiler. Opus 4.6 totally failed (even if popular coverage didn't portray it as such):
"Future of software engineering" folks should keep stuff like this in mind. What model is going to undo Mythos's mess? What if that mess is your company's product? Hope you know some very patient humans![1] They should have different educational tracks. There is no reason why a big fancy school like MIT can't have computer scientists do something like SICP and software engineers do the applied Python class. Forcing every computer professional into "computer science" is just silly; half the students gripe about how useless the theory is, the other half gripe about how grubby the practice is. What really sucks here is that I think Big Tech would support the idea, we're just stuck in a weird social rut.
What we need is better code analyzers, lexers, and the like. And LLMs are practically the opposite, because by design they can never, ever give a concise answer. Worse, they rot over time.
https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-...
Well, you have to separate "future of" from "ensuing damage". This is similar to the fishing industry. Fishermen in the past used spears, rods, and small nets; nowadays annual national catch statistics are reported in kilotonnes. They are destroying the ocean floor, causing massive extinction of species, causing irreversible damage. Yet, looking 100-150 years back, you can't argue that industrial fishing was not "the future of the fishing industry". That is also why programmers won't ever disappear because of AI progress. Just like we still need fishermen, we'd need programmers. The sad truth is that soon we may truly have no need for fishermen, because there's no fish left in the ocean.
This sounds like unsubstantiated hyperbole - can we keep HN grounded in reality, please?
My alternative hypothesis - you don't like agentic coding or maybe LLMs in general. Not helpful for the group.
Because deep down they are incomparable categorically. Separate the tools from the foundational ideas and you see the very different value. Vim-model of text navigation is fantastic, practical, brilliant idea. Once you grok it - you can take it anywhere. You can use it in your editor, browser, terminal, WM. Emacs is rooted in another, even more brilliant idea of practical notation for lambda calculus. These ideas have no overlap. But understanding the philosophy of each (ideally both) could open so many different possibilities.
Isn't that kinda expected with a new software release, that it doesn't have a 100% feature parity?
Anyone know of something like this?
https://flathub.org/en/apps/io.github.aganzha.Stage
The IntelliJ git client is my favorite by far, I am curious what do you not like about it?
I also suspect it allows easier consolidation. Moving from a deprecated lib to a new (and better) one for example.
Implementations will likely homogenize a bit as well, but on the other hand boy am I glad not to see an increasing amount of bizarre naïve hand-rolled implementations for some things.
You're right, Spacemacs is essentially a batteries-included version of Emacs.
[1] https://nullprogram.com/blog/2017/04/01/
Hence my question: what could Wellons (who's a seasoned veteran of Emacs) ever say about Spacemacs (or Doom - which in this context makes no difference)? What kind of views would one be interested to hear? Using the Space key as the "Leader key", or something about the local-leader key; or vim-navigation/Evil in general; or the modules/layers architecture of the Emacs config? He said in that post you shared that he believed he'd eventually end up using Evil - he doesn't need to use Spacemacs for that.
Spacemacs is great for beginners, for people who don't want to deal with learning Emacs' native bindings - they are legit confusing. For someone like Chris, it makes little sense; he'd probably just add modal editing packages to his existing config. Even so, Spacemacs and Doom are still valuable - one can find many interesting gems there.
Also, these projects may instill good discipline for structuring your keys mnemonically - everything file-related lives under "SPC f", search stuff under "SPC s", etc.
From the linked post:[0]
> I left an employer that is years behind adopting AI to one actively supporting and encouraging it. As of March, in my professional capacity I no longer write code myself. My current situation was unimaginable to me only a year ago. Like it or not, this is the future of software engineering. Turns out I like it, and having tasted the future I don’t want to go back to the old ways.
It's deeply distressing to watch people fall into AI psychosis. Being smart, accomplished, or experienced is no defence.
After the bubble pops and the industry realises the damage these tools can do to people, folks like the author will have to confront that they were taken in by a lie. Many won't be able to confront that.
[0]: https://nullprogram.com/blog/2026/03/29/
this is just like being promoted from developer to manager. some people like it, some don't. with AI there is another dimension: some people like managing machines instead of people, some don't.
it's not for me. i don't want to stop writing code. i don't mind managing people but i don't want to manage machines (at least not through as imprecise an interface/outcome as AI provides). consequently AI may be fine for this person, but it is not for me.
Anyone who has actual corporate team lead or management experience understands AI as effectively a junior dev who doesn't have great persistent memory. Devs using AI are reviewing, guiding, and validating the work handed to them by AI just as they would from a junior dev.
The inverse of your statement is more apt; it's distressing to see people so angsty about AI usage. There are going to be skilless vibecoders and then there are going to be experienced devs (like OP) who figured out their AI workflow to multiply their productivity 2-5x.
What the future holds for AI model pricing-- that is a valid concern. But I don't think that's what you intended.
It's unclear what you're saying here... Yes, AI-induced psychosis is a real problem and the frontier labs' mitigations are ineffective, to put it mildly. But using AI as a coding tool doesn't have anything to do with psychosis.
Perhaps you're confusing "not using AI" with "not being dependent on AI", those are very different things.
The edge isn't from avoidance; it's from using AI as leverage on top of real skill. A strong developer + AI beats a strong developer alone, and massively beats a weak developer + AI. The edge comes from being the kind of person who doesn't need the tool but uses it anyway. That's leverage. Refusing to use it just leaves leverage on the table to make a philosophical point.
> After the bubble pops
People like Chris (who is an enormously capable engineer) will just move on to different tools, different techniques and paradigms. That is the essence of being a software developer - many of us choose this path specifically because it forces you to learn something new, every single day. That is (I suspect) also another reason why Wellons decided to migrate away from Emacs - he learned it so deeply that perhaps it no longer gives him the satisfaction of learning. Which, to be honest, is hard to believe - Emacs is a boundless playground, there's always something new to learn there.
Saying they have psychosis is a rude exaggeration.
Maybe a lot of people who are doing that aren't admitting that they've stopped writing code, but when all you're ever doing is manually fixing a few lines, or moving blocks of code to more sensible places, fixing jumbled parameters in a call and such, you're not really writing code anymore. You're now a chef in a kitchen yelling at assistants and just touching things when dealing with communicating a correction to one of those dimwits is more frustrating than just doing it yourself.
You still have to be a cook to be a chef, though. But the reason I say that AI is dumb is because I tell it to do things, it does them in a dumb way, and I complain at it and tell it to write it in a sensible way. It screws that up, and I tell it to do it again, and not to screw it up. I'm still not coding. If it goes into a loop of nonsense, I touch things with the intention of doing just enough to knock it out of that loop (or rather keep the new context from falling into it.)
"After the bubble pops" we might see that a lot of new chefs can't actually afford assistants. But just as likely, the overbuilt (government-subsidized directly and through policy) capacity might end up getting written off, and at the cost of electricity and maintenance costs could stay reasonably good. Or algos improve. Or training methods improve.
Plenty of accomplished devs are getting good results and accomplishing tasks with unheard-of speed using AI, so if you're still not, that's a PEBKAC. You are not using the tools correctly. Figure it out before you complain.
Absolutist rubbish.
> "But even those open source projects with significant user bases that forbid the use of AI [...] will be eclipsed by those that support it in terms of features, capability, and security."
As is this. Whether a language model is relevant to a project, open source or otherwise, of course depends heavily on its nature (ethics, use case, deployment, working environment/culture, et cetera).
???
>You are not using the tools correctly.
Stop being deluded, man.
When this crap collapses in on itself, you will be in tears asking for the knowledge you failed to acquire without the fancy Clippys.
Now, stop that fancy Megahal chatbot and learn to do things by hand.
Yes, model collapse is gonna suck. But LLMs are not just left to self-train, they are guided by human researchers who are going to find ways to groom and direct the models to avoid collapse. They can make billions by shipping better models, so why wouldn't they invest a lot of effort in that?
I've been using it since 1994.
Whoa, shit, I'm old.