Coté

What AI is good at, or, please don't fuck up my job and ETFs

I’m clearly a big fan of AI and believe it’s helpful in many ways.

I feel comfortable with that because I’ve used it for over two years now and rely on it daily for a wide variety of tasks, both work- and personal-related. That means I know exactly what it’s capable of, what it’s good at, and what it’s not good at. Me and the robot have a good relationship: we know how to work with each other.

From Katerina Kamprani's The Uncomfortable collection.

Generative AI is good at text

Right now, generative AI is only good at working with text. It generates text—if you can reduce audio to text, it excels at that, and if you can convert text to audio, it’s equally proficient.1

Text can take many forms, and generative AI handles them well. As others have noted, if you want to shorten text, it’s almost amazing at that. If you want to summarize text, it’s pretty good at that. And if you need a summary to help decide whether to read the full text, it’s fantastic.

Learning and strategy

If you want to learn and understand a topic, part of that process involves condensing large amounts of text into a shorter, more digestible form—and it’s pretty good at that. All of the consumer agentic things are going out there and searching the web for you to find that text and then summarizing it all. If what you want to learn and understand is well documented on the World Wide Web, it is good at that. If you want to get insights into secret, obscure, poorly documented things - stuff that has little public text - the AI’s Deep Research is going to be shallow bullshit.

Even when it’s good, with learning and understanding, you need a finely tuned bullshit detector. Once you detect the bullshit, you can ask again or go search on your own. But really, you need a bullshit detector in all of life—robot or meat-sack. If you don’t have one, build one quickly. The benefits you get from that will last longer and far outweigh the benefits you’ll get from AI.

This aspect of learning means it’s not so great at company strategy. If you and your competitors are all using the same public text, you’re all going to get the same answer. There will be no competitive advantage. What’s even worse now is that it’s effortless for your competitors to understand your strategy and predict the ones you’d come up with…if you only based it on public text. You have to figure out how to get your secret text in there. With all interactions with the robot, you have to bring a lot to the chat window. The quality of what you bring will determine the quality you get from the robot. Garbage in, garbage out. Which is to say: nothing valuable in, nothing valuable out.

Back to pedantry: It’s proving to be good at teaching-by-modeling: it shows you what The Big Report could look like, explains what the fuck those 80 slides your teacher gave you are asking you to do in your essay, and serves as an additional tutor and instructor when you can’t afford to hire one.

From Katerina Kamprani's The Uncomfortable collection.

Writing & creating

The robot is also effective as a co-writer. In other words, you become a co-writer with the robot. It can generate text endlessly, and if you collaborate with it, you’ll get great results. Just as you would with any co-writer (especially a ghostwriter or a co-author whose name appears in smaller print on the book cover), you need to get to know each other, learn how to work together, and figure out the style you want. Claude is great in this regard—it has a simple tool for learning and refining your style. If you haven’t spent time teaching the robot your style, you should do so.

You can reduce videos, podcasts, scripts, even small talk, to text. Recall: AI is good at text, so it will be OK at those.

It’s okay at imagining. I play D&D with it, and it has gotten a lot better at handling mechanics over the past two years, but it still remains rather boring and predictable when it comes to unassisted imagination. If you feed it published D&D adventures, it does okay. But just try having it come up with ten dwarf names—they’ll all be variations on “Iron Shield” or “Rock Breaker” and the like.

It’s really good at writing code. And guess why? Code is text. Is it good at creating multi-system applications and workflows used to, say, coordinate how an entire bank works? Probably not—very few people are even good at that. And then there’s the whole process of getting it into production and keeping it running. If you think the robot can do that now—or ever—¡vaya con Dios! Please report back if you survive.

The AI is bad at perfection

What about tasks like optimizing supply chains? Maybe one day the robot will be good at other B2B tasks, but I suspect that for many years good old-fashioned machine learning will keep doing just fine there.

Don’t use AI for tasks where being 100% correct is important. If the consequences of being wrong are dire, you’re going to get fucked. Worse, someone else is going to get fucked.

But, if you’re using the robot for a system that tolerates—or even thrives on—variety (errors), it’s great. “Anti-fragile” systems? I don’t really know what that means, but: sure. Are you brainstorming, whiteboarding, and exploring? Yup, great at that. Using it for therapy? It’s fascinatingly good at that.

You get the idea: if you’re using generative AI for something where you can recover from errors quickly, there is “no right answer,” and the task is text-based, then yes, it is great for that—and you need to start using it now.

From Katerina Kamprani's The Uncomfortable collection.

Thirty days to defuse the time bomb of false expectations

Let’s build up to my concern:

  1. Text is all generative AI is currently good at.

  2. Most people have not used AI for 30 days, let alone 12 months, let alone two-plus years. I’m just guessing here. Surveys tell dramatically different stories. But most surveys show only a small amount of use, and just recently.

  3. So, I don’t trust that most people yet understand what AI is good at—they often imagine it’s capable of far more. You have to use it to know it, and learning by doing is a lot of effort and usually takes longer than your ROI model’s horizon.

That’s “hype,” sure, but it’s more like misunderstood inexperience. It’s classic diffusion of innovation (ask Chatty-G to tell you about that concept; I bet it’ll be pretty good). Sure, that diffusion has been getting faster, but if humans are involved, we’re still talking decades—at least one decade.

My concern here is that once we collectively set expectations beyond reality, the fall is bigger, and the cost of recovery becomes too great. Worse yet, people waste a lot of time chasing AI fantasies. They thought there’d be 100x returns when, in reality, there were only 10% or even 25% returns. You fire employees, take on investment and risk to overhaul your business, and spend time on those AI fantasies instead of pursuing other strategies. And then, when you learn what AI is truly/only good at, you’ve invested everything—only to find that your assumptions, ROI models, and, thus, investments were a fantasy. Plus, once you build it, you now own it forever, no matter how shit it is. Plus, you played a game of chicken with opportunity cost, and opportunity cost won.

So, don’t do that. Don’t bet the farm on something you haven’t used firsthand for at least 30 days, and certainly don’t stake our jobs or our index funds on it.

Wastebook

  • “I was the man of my dreams,” Peter on Peter.

  • “the unexampled,” on Gary Snyder.

  • And, from Gary: “this romantic view of crazy genius is just another reflection of the craziness of our times… I aspire to and admire a sanity from which, as in a climax ecosystem, one has spare energy to go on to even more challenging – which is to say more spiritual and more deeply physical – things”

  • “Mandatory Commute policy,” synonym for RTO.

  • “autogolpe,” self-harm.

  • “If you change it, you own it,” if only.

  • “monomaniacal dork squads,” power-up.

  • “a steaming pile of, um, DOGEshit,” deep analysis.

  • “Our Son of a Bitch,” various.

  • “You can’t sell a sandwich with secret mayo,” Noah’s quest continues.

  • There’s a first time to forget everything.

  • “[rhapsode](https://en.wikipedia.org/wiki/Rhapsode).”

  • “The Deeply Spiced Meatballs That Call Back to Haiti.”

  • “Features of the future,” a CF Day topic.

  • When submitting a conference talk and given the option to select “audience level,” I’ve started always selecting “intermediate.” I don’t know why, or what that means, but it’s some kind of fun.

  • “LLM aka Large Legal Mess,” don’t use the robot for lawyer-shit.

  • “inspo,” AI hair.

  • “If I’d wanted chatGPT to answer, I’d have asked chatGPT” @byronic.bsky.social.

  • "My leather jacket tailor never flinched, so I'm not sure what's wrong with all the Finance Bros."

  • Deep is the new plus.

Relative to your interests

Predictably, a bunch of AI stuff of late.

  • The reality of long-term software maintenance - “In the long run maintenance is a majority of the work for any given feature, and responsibility for maintenance defaults to the project maintainers.” Related:

  • Top EDI Processes You Should Automate With API - Tech never dies. Helpful consequence: take care of it before it takes care of you.

  • How’s that open source licensing coming along? - ”The takeaway is that forks from relicensing tend to have more organizational diversity than the original projects. In addition, projects that lean on a community of contributors run the risk of that community going elsewhere when relicensing occurs.”

  • Key insights on analytical AI for streamlined enterprise operations - ”The big issue, whether it’s generative or analytical AI, has always been how do we get to production deployments. It’s easy to do a proof of concept, a pilot or a little experiment — but putting something into production means you have to train the people who will be using this system. You have to integrate it with your existing technology architecture; you have to change the business process into which it fits. It’s getting better, I think, with analytical AI.” // It’s always been about day two.

  • Why I think AI take-off is relatively slow - My summary: humans resisting change is a bottleneck; also, humans not knowing what to do with AI; current economic models can’t model an AI-driven paradigm shift, so we can’t measure the change; in general, technology adoption takes decades, 20 for the internet, 40 for electricity. // AI is a technology and is prey to the usual barriers and bottlenecks to mass-adoption.

  • GenAI Possibilities Become Reality When Leaders Tackle The Hard Work First - Like any other tool, people have to learn how to use it: “Whatever communication, enablement, or change management efforts you think you’ll need, plan on tripling them.” // Also, garbage in, garbage out: “GenAI can’t deliver real business value if a foundation is broken. Too many B2B organizations are trying to layer genAI on top of scattered, siloed, and outdated technologies, data, and processes. As a result, they can’t connect the right insights, automations stall, and teams are unsure of how to apply genAI beyond basic tasks.”

  • A.I. Is Changing How Silicon Valley Builds Start-Ups - ”Before this A.I. boom, start-ups generally burned $1 million to get to $1 million in revenue, Mr. Jain said. Now getting to $1 million in revenue costs one-fifth as much and could eventually drop to one-tenth, according to an analysis of 200 start-ups conducted by Afore.” // Smoke 'em if you got 'em…

  • The AI Experience - What’s Next For Technology Marketing - Back up the truck and dump the enterprise marketing slop: “Did you consider that soon you may be marketing to GenAI agents of your customers?” // And: “While the term “Account Based Marketing” or ABM is still floating around, less marketers are focused on continuing to enable personalized marketing for a subset of the customer and prospect base.” // Instead of having to craft the personalized content, you have the robot do it. Then the marketing skills you need go back to the mechanics of running campaigns. // Yes, and, this is an example of my “bad things are bad” principle. If the slop you get is bad, it will be bad. But it can also be good, in which case, it will be good.

  • How Ikea approaches AI governance - ”Around 30,000 employees have access to an AI copilot, and the retailer is exploring tailoring AI assistants to add more value. Ikea is also exploring AI-powered supply chain optimization opportunities, such as minimizing delivery times and enhancing loading sequences for shipments to minimize costs. AI in CX mostly targets personalization. // ‘“I’m not just talking about generative AI,” Marzoni said. “There’s some old, good machine learning models that are still absolutely delivering a lot of value, if not the majority of the value to date.”’

  • U.S. Economy Being Powered by the Richest 10% of Americans - One estimate: in the US, “spending by the top 10% alone accounted for almost one-third of gross domestic product." // Never mind the, like, morals?…doesn’t seem very anti-fragile. // “Those consumers now account for 49.7% of all spending, a record in data going back to 1989, according to an analysis by Moody’s Analytics. Three decades ago, they accounted for about 36%.”

  • Why it’s nice to compete against a large, profitable company - Because they can’t lower prices on their core products lest Wall Street freak-the-fuck out.

Logoff

See y’all next time! Gotta go run a few ideas by my pal, the robot.

1

It can kind of convert text to images, but only if you like the same people over and over or are an anime fan. If you like a perfectly chiseled chin, AI generated images are for you. You can put a lot of work into getting your text in shape to produce something unique that looks real. In this respect, it gives a tool to people who can’t do graphics (like me!) which is actually pretty amazing. But it can only go so far. Just try to create a “realistic” looking person instead of a perfect fashion model. It is near impossible. Of course, this is because it’s not trained on enough images yet, I guess.

@cote@hachyderm.io, @cote@cote.io, @cote, https://proven.lol/a60da7, @cote@social.lol