>Now the transcript happens in the background, a summary lands in my Obsidian vault automatically, and I can actually be present in the conversation. That’s 20 minutes a day I got back, every day, without thinking about it.
Honest question: Do you actually read any of these notes?
I think there is a fundamental flaw with not taking notes. I'm convinced taking notes forces you to properly consider what is being said and you store the information in your brain better that way.
Taking notes during meetings isn't to improve understanding, or to "read" afterwards.
They're a record of what was discussed and decided, with any important facts that came up. They're a reference for when you can't remember, two weeks later, if the decision was A and B but not C, or A and C but not B.
Or when someone else delivers the wrong thing because they claim that's what the meeting decided on, and you can go back and find the notes that say otherwise.
I probably only need to find something in meeting notes later once out of every twenty meetings. But those times wind up being so critically important, it's why you take notes in the first place.
Right, so it's for accountability instead. Have you considered generating stories or tasks from the notes in that case?
Still, I think it's better to discuss "action points" in that case and give a clear owner to those points. This always helps me understand who's accountable and which actions actually need follow-up.
The question is, what artifact records the action points, the owners, who is accountable? And all the necessary associated information?
Notes do. Ideally there is a meeting owner who produces official notes and emails them to everyone, but frequently that never happens. And when it does happen, sometimes they're wrong and you need to correct them.
Which is why you need your own meeting notes. Plus, like I said, there are facts that come up that you want to document as well, that aren't part of the action items, but have value.
> The question is, what artifact records the action points, the owners, who is accountable?
I think the person you're replying to is suggesting that the shared place for recording these things in a medium-large software department would be in project tracking software like Jira or Github Projects.
And I'm saying, a lot of the time either the company doesn't use such tracking software, or it uses it for software development but not the meeting you just had with legal or finance or design or people outside the company or whatever.
The kind of stuff stored in Jira is a very specific subcategory of all the types of things that get mentioned and decided in meetings. It doesn't cover all of it, not even close. And the person putting the information in might also get part of it wrong, that happens surprisingly frequently. It's not a substitute for personal meeting notes.
You can use an LLM to generate first drafts of flashcards that you import into, and later revise in, a true spaced-repetition system — such as Mochi or Anki.
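As a concrete sketch of that workflow: Anki's import dialog accepts plain tab-separated text files (front, tab, back, one card per line), so LLM-drafted cards only need to be dumped into that format before you revise them in the app. The `draft_cards` list here is a stand-in for whatever your LLM actually returned.

```python
# Minimal sketch: write LLM-drafted Q/A pairs to a tab-separated file
# that Anki's "Import File" dialog can read (front<TAB>back per line).
# `draft_cards` is a placeholder for the LLM's output.

draft_cards = [
    ("What year did the French Revolution begin?", "1789"),
    ("Who wrote 'The Structure of Scientific Revolutions'?", "Thomas Kuhn"),
]

def export_tsv(cards, path):
    with open(path, "w", encoding="utf-8") as f:
        for front, back in cards:
            # A literal tab inside a field would break the column layout,
            # so replace any with spaces before writing.
            front = front.replace("\t", " ")
            back = back.replace("\t", " ")
            f.write(f"{front}\t{back}\n")

export_tsv(draft_cards, "draft_cards.txt")
```

The point of the revise-before-importing step is that the drafts are just drafts: you still prune hallucinated or badly-scoped cards before they enter your review queue.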
For learning new material, make your LLM assume a Socratic position. Kagi Assistant has a custom Study model that does this. The key is getting the model to increase your friction (which is what drives learning and memory) instead of decreasing it.
I feel like you'd be better off generating (or manually compiling) the dataset you'd like to memorize and then using existing spaced repetition tools to learn that data.
I suspect it would be less effective to learn from similar-yet-slightly-different content the LLM regenerates every time you want to study.
When the LLM enters the "card" into the API it would define the required information, and then every time it would ensure you cover the required information. You could tell the LLM, "make sure I remember the date of this historical event", and then the LLM would ensure you mention the date when answering the card. If you get the date wrong, then it does the API equivalent of pressing "wrong / again" in Anki.
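A minimal sketch of the grading step being described (the `required_facts` field is an assumption, and the substring check stands in for an LLM judging whether each fact is covered; a real setup would wire the verdict to something like Anki's AnkiConnect API or Mochi's API):

```python
# Sketch of the check described above: each card carries a list of facts
# the answer must mention; a missing fact maps to Anki's "Again" button,
# otherwise "Good".

def grade_answer(answer: str, required_facts: list[str]) -> str:
    """Return 'again' if any required fact is absent from the answer.

    A real implementation would ask the LLM whether each fact is covered
    (so paraphrases count); case-insensitive substring matching is a
    stand-in for that judgment call.
    """
    missing = [f for f in required_facts if f.lower() not in answer.lower()]
    return "again" if missing else "good"

card = {
    "prompt": "Describe the fall of Constantinople.",
    "required_facts": ["1453", "Ottoman"],
}

print(grade_answer("The Ottomans took the city in 1453.", card["required_facts"]))  # good
print(grade_answer("The Byzantines lost their capital.", card["required_facts"]))   # again
```

The design choice worth noting: the card's content stays fixed (addressing the consistency worry upthread), and the LLM is only used as a grader, not as the source of ever-shifting study material.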
I don’t read the notes generated by AI during meetings - but that’s not their purpose, either. The agent reads the notes, and uses that context to be better at what I’m asking it to do.
It also lets me ask questions like “When did we decide to change the login modal?” and get an accurate response.
I think it depends. Honestly, for me, the alternative to automated notes from meetings is that I don't take notes. I know, I should, but I don't. I've tried numerous times to instill the habit unsuccessfully.
Where the value for me comes from is sending them out immediately after the meeting, not archiving them in a vault I never look at. "Here's the summary of what we discussed, and the distilled action items we each agreed to take."
Like the author, I've gone out of my way to avoid hosting my personal stuff with Big Tech providers, but when it comes to work, I give in to whatever we use, because I just don't have capacity to also be IT support for internal technology. It's still uncomfortable, but I have to be honest about what I have time for.
The gist is the OP went nuts replacing Google and Meta with self-hosted tools, and now he's feeding more data than ever into Anthropic or OpenAI (didn't specify, or I missed it. Skimming AI-generated blog posts tires the eyes.)
That's par for the course, honestly. News-cycle-driven anti-big-tech sentiment is weak fuel for a lifelong commitment. Something new was going to come along.
I am always happy for anyone who felt stuck on their side projects and no longer does, though.
To be fair, OP talks specifically about that -- that's a full quarter of the post:
> I’ve spent the past year moving away from surveillance platforms... And yet I willingly feed more context into AI tools each day than Google ever passively collected from me. It’s a contradiction I haven’t resolved. The productivity gains are real enough that I’m not willing to give them up, but the privacy cost is real too, and I notice it.
>I’ve settled into an uneasy position: AI for work where the productivity gain justifies the privacy cost, strict boundaries everywhere else. It’s not philosophically clean. It’s just honest.
> I’ve settled into an uneasy position: Crack Cocaine for work where the productivity gain justifies the privacy cost, strict boundaries everywhere else. It’s not philosophically clean. It’s just honest.
Has anyone else considered that producing code faster isn't necessarily a good thing? There's a lot that goes into getting a solution correct that has nothing to do with programming. Just because you can scale code production doesn't mean you can scale things like understanding user wants and expectations. At a point you're creating more work for yourself and your organization, because unless you get everything perfect the first time, you're creating more work than you're resolving.
I just went through this exact thing this week. We've been working on a new feature, that if vibecoded as soon as the docs landed in our lap, would have resulted in a lot of duplicated functionality and an expanded data model. The more we worked through the solution with other engineers the more we realized the problem had been solved by another team and our solution could be small and elegant.
I can't claim that AI has no benefit to our organization, but I do think that as my career has matured, I find myself spending more time thinking about how code changes will affect the system as a whole, and less time doing the actual coding.
I agree that it isn't always a good thing. The assumption is that writing code, at some level, is one of the bottlenecks to delivery. If you "widen" the bottleneck by removing the time it takes to generate the code, your new throughput is going to create stress on other delivery areas: gathering feedback, testing, validation, approval processes, etc. I think the most effective results would come from a holistic approach to removing those other bottlenecks in addition to reducing the time required to produce code.
> Has anyone else considered that producing code faster isn't necessarily a good thing?
This has been a relentless goal of the industry for my entire 40-year career.
> At a point you're more work for your self/organization because unless you get everything perfect the first time you're creating more work than you're resolving.
Nothing is correct the first time (or rarely). Accelerating the loop of build, test, re-evaluate is a good thing.
I think you captured it. Not many people agree, but the real-world metrics speak the truth: trying and failing faster gets you further than methodical planning and structured approaches.
There IS experimental evidence on this, and anyone's anecdotal opinion is instantly blown to smithereens by the fact that this was tested, and producing code faster is provably better.
Yes, but also a lot of work in software is solving solved problems. It would be nice if we could apply AI just to that, but all of humanity's previous experience with digital technology tells me that we won't.
This must be why KLOCs are considered such a great indicator of productivity and why churn is used to measure code quality /s
I've worked in multiple start-ups and more mature companies, and they always slow down, because producing code is easier than building a product. More code is only better when quality hardly matters, which is basically never.
> The bottleneck was never typing speed. It was always understanding -- understanding the system, understanding the user, understanding what "correct" even means in a given context.
This is also the problem about having "conversations" with AI boosters.
These people have been convinced of a world view that devalues *understanding*. Of course they aren't interested in *understanding* what you have to say to them.
> I've watched teams go from deploying weekly to deploying 5x/day after adopting AI coding tools. Their velocity metrics looked incredible. Their incident rate also tripled. Not because the code was worse per se, but because they were changing more things faster than their observability and testing could keep up with.
But this is an improvement! The features-per-incident rate improves: you have more incidents, but fewer relative to the increased velocity. This may or may not be a valid tradeoff depending on the impact of incidents.
At least in my org we have an understanding that the product side will have to change drastically to accommodate the different rates of code development.
The experience in both my personal and social circles has been that these tools are spotty at best. They often miss important things, overemphasize the wrong things, etc. At a surface level they look good, but if you actually scrutinize them they fall apart.
> At a surface level they look good but if you actually scrutinize them they fall apart.
This is overwhelmingly true for AI generated code in my experience.
FWIW it makes me highly discount the perspectives of internet commenters who argue that LLMs generate "better than human" or even "mostly working" code.
The top quartile of coders in my org definitely code better than most agents. The rest? I trust the output of an LLM to be far more consistent when adding/deleting features across service layers than a human who can make accidental typos. Same thing with bog-standard React components or Docker build scripts.
I think a better take is from the MIT study last year, the one that found almost all AI pilots failed.
In that study they found that pretty much everyone was using AI all the time, but they were just using their personal accounts rather than the company provided tools (hence the failures)
In light of this, I'd say there is a very good chance that people are offloading their work on AI, and then taking that saved time for themselves i.e. "I can finish the job report in 30 minutes rather than 3 hours now, so by 9:30 I'm done with work until after lunch."
The end result of this will either be layoffs to consolidate work or blocking of non-company monitored AI ensuring they can locate those now empty time slots.
I'm curious if blocking non-company AI is even possible. It's very easy for me to imagine someone, say, turning the wi-fi off on their phone and using the claude phone app, or texting their openclaw.
> > The actual gains are granular and personal, which makes them hard to count and easy to dismiss.
> It also means the trillion dollar valuations might be bunk?
Yes, unless the selling point is the institutional and social instability you can create by handing LLMs to technically incapable users and telling them they can write code now.
> What products? This blog post is long on vibes and short on evidence.
I think this is an uninteresting question. Almost every company is putting AI produced code into their products now and has been for years. Whether it's entirely vibe-coded or not is beside the point.
I'm working on 4 kubernetes operators that we use internally at work in production currently. 3 of them were handcrafted, one of them was vibe coded using the other 3 as a template. Almost all of the work being done on all 4 of them is now done by AI, whether it is copilot or cursor or claude code. Stuff that used to take me days or weeks now takes hours.
Just to give one example -- yesterday I added a whole new custom resource to an operator with some quite complicated logic that touched 8 or 9 different kubernetes resources. It's not a hugely complicated task, and I could have done it in 2-3 days by myself. Claude Code essentially one shotted it in 15 minutes, including tests. It misunderstood some things, it made some judgement calls that I didn't like in terms of spec, it wrote some new code instead of using pre-existing code, but fixing that took another 90 minutes or so, then I was done.
You can put your head in the sand all you want, but the latest versions of the LLMs running in Claude Code are the real deal. They produce code 5-10x faster than an engineer working alone, and it's almost always better code than the engineer would have produced, with more documentation and tests, and even better-written PR comments and Jira tickets.
If you want to talk about valuations, consider now that there is a very real conversation about hiring vs spending more on tokens and spending more on tokens almost always wins. Anthropic is going to be absolutely printing money over the next year, and I would not be surprised if they turn a profit in two years.
I had a thought about this coming from the book "Seeing Like a State."
Productivity in large organizations has never been, and can never be, purely a matter of the legible work that is written in Jira tickets, documented, and expressed clearly; it is sustained by an illegible network of relationships between workers and unwritten knowledge and practices. AI can only consume the work that is legible, but as more work gets pushed into that realm, the illegible relationships and expertise become fragmented and atrophy, which puts backpressure on the system's productivity as a whole. Having read that book, my guess is that attempting to impose perfect legibility for the sake of AI tooling will ultimately prove disastrous.
One thing it did massively for me was save time on questions like "should I go with option X or Y?" Before, I used to just think longer about tradeoffs, but with AI it became a lot faster. No more procrastination due to decision fatigue.
I’d take this a step further and say that the deployment failure isn’t just management failing to provide training, etc.
If you take 100 people, not all of them will have the intellectual curiosity, enthusiasm, and flexibility to turn their ChatGPT license into productivity gains. No amount of training will overcome a fundamental lack of curiosity and willingness to experiment.
And in very corporate environments there are lots of people like that who have thrived just fine thus far, because everything is written down in a step-by-step policy.
For one thing they were just early. Whatever measurements people made of AI six months ago are invalid. It’s a different animal now.
Plus you get a wildly different payoff the more you can take humans completely out of the loop. If it writes the code but humans review, you’re still bottleneck. If it designs and codes and reviews and goes back to designing, and so on, there’s no effective speed limit.
Big businesses aren’t going to work that way though. Which is why we shouldn’t be looking to them as thought leaders right now.
That's because you're getting left behind. The technology is outpacing you because most likely you're not using it right. Also likely you're not in an environment that pushes you to use it right so you just give it half assed attempts, never putting the initial effort to up your game with AI.
At my company, if you don't use AI, your productivity will be much lower than everyone else's, and that will result in you getting fired. The expectation is 3-4 PRs a day per person.
Bro no need to be snarky. You're not useless to the economy. You're in the process of becoming more and more useless. Unlikely to be completely useless but AI is for sure eating away your job. Denying it and acting like this is just delusional coping.
I'm not singling you out. This applies to all of us, you, me, everyone.
I'm glad that this is making this individual more productive, but to quote the Fortune article:
> “AI is everywhere except in the incoming macroeconomic data,” Apollo chief economist Torsten Slok wrote in a recent blog post, invoking Solow’s observation from nearly 40 years ago. “Today, you don’t see AI in the employment data, productivity data, or inflation data.”
So I don't feel like TFA is a necessarily a rebuttal to this. The proof would be in the pudding.
His argument is that this isn’t a failure of AI to perform as advertised, but a series of deployment failures at businesses. He theorizes that they’re buying a million licenses for ChatGPT or Copilot, dumping them in the laps of employees, and assuming the results will just… “show up”.
So I guess he’s making the case that the tools are good… the employees are just holding it wrong.
> Meeting notes are the obvious one. Before Granola, I’d either scribble while half-listening or pay attention and try to reconstruct things afterwards from memory. Both were bad. Now the transcript happens in the background, a summary lands in my Obsidian vault automatically, and I can actually be present in the conversation. That’s 20 minutes a day I got back, every day, without thinking about it.
Yikes. So, 1) meetings at your company suck. In general, you should be engaged and take short, summary notes and todos while you're there; no need to have a transcript or AI summary. Talk to your manager about getting meetings right. 2) "without thinking about it" might not be the best phraseology in this overall context. :)
While the idea in the post is an interesting one, the analogy to planing is terrible. The difference in results from a power planer and a hand plane (even with a pretty basic blade) is night and day. Wood planed with high quality and sharp steel has a finish that doesn't even need oil or varnish.
People talk about how non-AI code will become an artisanal craft and I think it's a bit of a stretch. The one exception might be when code has an intrinsic aesthetic quality in itself, rather than just the functional output, something like the obfuscated C code competition entries. Hand-worked wood might be crappy too, like a school woodwork birdhouse project made by a beginner, but a truly artisanally crafted piece of furniture or cabinetry has a very tangibly superior output to an IKEA bookshelf or other industrial stuff.
On the point of doing work for the sake of doing work and not for the sake of the value of the output, this is nothing new, as suggested in the blog post. But the more apt analogy would be all the "bullshit jobs" that have existed for decades in modern corporations. People who expand their teams to justify more budget to hire more people to create more work to expand their teams to get bigger budgets, etc. All the while producing nothing of real value in the company. The thing that AI seems to have done is accelerated and exaggerated this tendency, maybe since it was already the natural tendency within the logic of our corporate work culture.
Here's what this says between the lines / in a roundabout fashion:
---
LLM tools were sold to firms as game-changers. The truth is that they work, but by saving bits of time and mental energy here and there. Those savings are meaningful in the aggregate and compound over time.
But there are a few catches.
First, most of the savings is realized by integrating the tools into workflows that are personal and often sui generis. That takes, on the one hand, know-how (but firms deployed the tools without clear direction and without training workers); and, on the other hand, time and effort (but many of the largely untrained workers aren't investing time and effort to do so, even if they have the know-how).
Second, even where the training, time, and effort hurdles are cleared, the benefits inure primarily to the workers themselves: less time and mental energy spent doing what appear to be coordination tasks — ones that, so far, have been considered necessary but do not meaningfully represent the firm's actual value creation. For the benefits to inure to the firm (and thus appear on the radar of CEOs like those interviewed for the Fortune survey tangentially mentioned in TFA) the workers would need to "reinvest" the time and mental-energy savings in activities directly causing the value the firm is presumed to produce. They have little incentive to do so and they do not. Hence no increased firm productivity.
And third, even where the "reinvest your new-found ease to benefit the firm" hurdle is overcome — here I'm visualizing Lumbergh's "Is This Good for the COMPANY?" banner — firms often have to trade their private data for those hard-won benefits.
Thus the author arrives at his main conclusion: he will happily use "AI" (read: LLM tools) at work, on the company dime and with company data, to make his life easier, but he won't use the tools in his private life because he's privacy conscious.
The other conclusion is that firms could theoretically derive benefits from LLM tools — but they haven't figured out that, in exchange for their own valuable data, over and above the monetary cost of the LLM tools, they need to train workers to use them; ensure workers use them, and properly; and ensure workers then do even more intensive work to drive the value creation the company is supposedly engaged in, instead of simply making their own lives easier.
---
My personal conclusion is that people are going to start seeing even more clearly than before that most of what they and their colleagues do during most of their time is busy work that creates no actual value.
A lot of anecdotes here and in the article so I’ll add my own.
AI isn’t a silver bullet. It takes many iterations to get right. Yes, there is a lot of on-the-surface-it-looks-correct-so-ship-it stuff going on. I cringe when someone says “Well AI says..”
I don’t care what AI says! Unless you have done the research yourself and applied your own critical thinking then don’t send me that slop!
That is to say, there are some really good LLMs out there. I started using Claude and it is better for code than ChatGPT. But, you must understand and appreciate the code before you push it.
Show what you build; prove the productivity gains by accounting for the extra 20 minutes you save every day. Prove all this stuff instead of just saying "oh yeah bro I'm totally more productive with AI." It is trivial to track these metrics if you're serious about your productivity as an individual. The article is big on words but fails to show even one concrete effect of increased productivity, or whether it even exists.
The article mentions that the survey is wrong because the productivity gains do not show up in the metrics, etc. But what about your personal metrics? What projects did you ship, how many per week, what was the total amount of minutes saved per week, how did you use those minutes instead?
Otherwise it's just productivity theater.
Most people never use an LLM assistant because their lives aren't complicated enough to require a dedicated 24x7 assistant.
I don't want to put OP on blast here, but this is unfortunately just complete slop writing.
The points being made are fine, I think, but look, if it's faster for you to generate than it is for us to read, I think this qualifies as denial-of-service-lite.
I keep happening across articles claiming that AI doesn’t actually increase productivity and I’m completely confused.
I used to debate with people about this, but it didn’t really change anything. Now, I just shrug and continue on with my work and, if someone asks, I help them use AI better.
My main worry now is when the AI bubble is going to burst, and what’s affordable now becomes unaffordable.
If you were already experienced and productive, it does very little for you beyond summaries, a little boilerplate, and possibly search help.
If you were unproductive, it allows you to be more "productive" while stalling or reversing your learning and growth.
Of course, person number 2's newfound "productivity" comes at the expense of leeching productivity away from the experienced and productive people by overloading them with reviewing and validating their non-deterministic generated spaghetti.
It amazes people who think pumping out code is the hard part of a project, when in fact that's the easiest part...
We've apparently collectively forgotten that lines of code is one of the worst metrics for measuring productivity.
> If you were already experienced and productive, it does very little for you beyond summaries, a little boilerplate, and possibly search help.
It sounds like you already have your mind made up about AI, but I disagree. The rest of your comment makes assertions and assumptions about points I did not make, so I'll leave those for someone else to address.
Another article claiming productivity without providing evidence of the quality of the work. How do we know these meeting summaries are accurate? And why are meeting summaries so great, anyway? I never had them before.
I don't really understand what it is with CompSci graduates and their bizarre aversion to handwriting, note taking, and any kind of skill that's derived from arts disciplines or "average joe" office systems.
Shorthand notation exists, and it's more than possible to develop your own. I'd trust an OBS recording going in the background over some AI slop that has some chance of micro-hallucinating what it's hearing. It also sounds like a skill issue that the author can't control the pace of his own meetings to the point where taking good notes is seemingly impossible.
The author's AI use cases seem like a band-aid to cover bigger problems. Let's not even get into the part of the blog post where the author has started delegating internal thinking and reflection to conversations with a LLM.
My problem with this article is the author didn't really provide any advice on how to hold it better.
The AI note taker sounds genuinely useful but beyond that he never discusses the actual techniques that he used to go from 1 week to implement a side project to 1 day.
On the other hand, where does the expectation come from, that you can be just as effective at using a tool as someone who actively used it since GPT-3.5? An OpenCode instance loaded with the latest frontier model is, to quote a poet, a rocketship to nowhere - it's on you to steer it towards the results you want to achieve.
Check out also Hyprnote which allows you to do the meeting transcription and note enhancement fully locally, or wherever else you want, with a BYOM approach.