How Hard Can It Be?
Commonplace, №4
A note on numbering: Commonplace entries don’t arrive in order. The form is older than that — Renaissance thinkers kept these books for decades, adding entries as life surfaced them, trusting the collection to eventually cohere. This is №4 because №4 is where we’ve landed. The others will find their numbers in due course.
It’s a random Tuesday in April 2026; you’re on your second coffee of the morning. A headline slides past: some model you haven’t heard of has been released, or updated, or benchmarked against another model. SWE-Bench Pro is singing its praises, and the bar graphs show how the new model did on “Humanity’s Last Exam” (which you’ve never even heard of). Your X feed is filling up with tech bros eagerly offering a $199.99 vibe-coding course on how to integrate the new tool into your workflow.
You file it in the growing folder in your head labeled things I should probably understand and use. That folder has quietly been expanding for the three years since ChatGPT reared its ugly head (sorry Chat – no shade intended). So you don’t open it. Instead, you open your Gmail. You check TikTok, IG, or X.
This avoidance is our natural response to a stressor – we avoid it, we distract ourselves, we convince ourselves that AI isn’t “that good.” Meanwhile, the field has moved faster than our human attention can track. Someone on X famously tweeted that when it comes to AI: “You need to be unemployed just to keep up.” Most people, thankfully, are not unemployed. (But give AI a chance, and it could happen next quarter! Just kidding…kind of.)
Somewhere on the Internet, there are people who read every paper on arXiv and have strong opinions about attention mechanisms; that ain’t us. We are the people who decided not to pursue a CompSci degree, the people who sit with teeth clenched when CNBC gleefully announces “middle management is dead” or “welcome to the SaaSpocalypse.”
This missive is for the much larger group of intelligent, literate people who have registered that something significant is happening, feel a low-grade obligation to engage with it, and cannot quite figure out how to make the most of it. You have tried ChatGPT. You were unimpressed, or impressed, or both. You have a vague sense that everyone else is using it (or maybe your boss is demanding it) – but integrating it into your work takes more time than doing the thing your own damn self. The feeling, if you had to name it, is not fear exactly. It is something more like being tired before beginning.
But today, walking away is no longer an option. It’s time to steal a hack from one of the world’s most successful tech entrepreneurs.
Here it is.
Jensen Huang is the CEO of Nvidia, which at the time of writing is one of the most valuable companies on the planet, and he has been running it for 33 years. In October 2023, he sat down for a long interview with the podcast Acquired. Toward the end, the hosts asked him a straightforward question: if he were 30 years old again today, would he start Nvidia?
He said “No.”
He said that building the company had turned out to be about a million times harder than he had ever expected, and that if he had known clearly, in advance, what the journey would demand — the pain, the vulnerability, the embarrassment, the shame, the full list of everything that goes wrong — nobody in their right mind would start a company.
Then he said something I have not stopped thinking about since. “The superpower of an entrepreneur,” he said, “is that they don’t know how hard it is. They only ask themselves: ‘How hard can it be?’”
He was not joking, exactly. He added that, to this day, three decades in, he still tricks his own brain into thinking “how hard can it be?” about everything he does. It is a running interior practice. A cognitive sleight of hand he performs on himself, knowingly, every day.
This is, when you first hear it, an odd thing for one of the most successful entrepreneurs in the world to admit. We expect our CEOs to project clarity. Infinite strength. Perfection. We expect them to tell us that they saw it coming, that they knew exactly what they were doing all along, that the path was always visible to them. Jensen, with rare candor, tells us the opposite: the path was not visible to him; if it had been visible, he would not have taken it; and that productive, self-imposed ‘not-seeing’ was the very thing that freed him to begin. When we stare too long at the difficulty and the challenges that arise, we talk ourselves out of doing “the thing.”
So here’s the move: not denial, not optimism, not “you’ve got this!” — but the deliberate decision to defer the question of how hard something is until after you’ve already begun. The hardness is real. You’ll meet it from inside the doing, where it’s tractable, instead of from outside, where it’s paralyzing.
Let that cook.
Nineteen centuries earlier, the Roman emperor Marcus Aurelius was writing private notes to himself, almost certainly not intending them to be read by anyone else, in what we now call the Meditations. Marcus had what is arguably the worst job in history: he ran an empire he had not asked for, during a plague he could not stop, on the frontier of a war that would not end, while grieving children who kept dying. He had one consolation: philosophy – introspection – contemplation. He used philosophy as a kind of interior maintenance — the way you might change the oil in a car you cannot afford to let break down.
Scattered throughout the Meditations is a particular instruction that he gives himself, over and over, in slightly different words:
Do not contemplate the whole of your life’s burden. Do not tally the full weight of what is being asked of you. Look only at the thing in front of you. Do this one thing, now, and then the next. The burden of an emperor is not carried by imagining the burden of an emperor; it is carried by writing the letter in front of you, receiving the envoy in front of you, making the decision in front of you. The totality will kill you if you look at it. The particular, you can handle.
This is not stoic cheerfulness. It is not the Roman equivalent of you’ve got this. It is something stranger and more interesting: a conscious choice, made daily by a man who had every reason to despair, to defer full comprehension of his situation indefinitely so that he could continue to function inside it.
Two men, nineteen hundred years apart, in temperaments and circumstances that could not be more different — a Roman emperor at the end of an age, a Taiwanese-American engineer at the beginning of one — arrive at the same functional technique. Don’t look directly at how hard this is. Do the next thing. The next thing is tractable. The whole is not.
It is not quite optimism. It is not denial. It is closer to a kind of managed attention: a decision about what to hold in focus and what to keep at the edge of vision.
So why does this matter for you, on your Tuesday, with your second (or third) coffee?
The standard posture toward a new and consequential technology is to try to understand it before engaging with it. You assume that competence precedes use. You would not drive a car without lessons. You would not do brain surgery without training. Your whole adult life, the pattern has held: first you learn, then you do. So when a technology arrives that is described by its own creators as potentially transformative, you instinctively reach for the same sequence. Read about it first. Understand it first. Form a full opinion first. Become “skilled.” Then, maybe, use it.
This is a category error, and it’s kind of a fake-out.
Because generative AI isn’t a car. It’s not a medication. It is closer, as a category, to a language — and languages are not learned by reading about them. They are learned by using them badly, again and again, until you use them just a little less badly. Children do not learn their native language (say, English in America) by studying syntax in a textbook; they begin by babbling things that are almost words, to people who mostly understand. The competence emerges through the practice. There is no pre-practice phase in which they become “qualified.” They just start.
The people you know who are getting real use out of these tools — and there are more of them than you think, quietly, without making a fuss — are almost without exception people who began before they were ready. They typed something clumsy into a chat window. They got something back that was wrong or weird or surprisingly useful. They tried to create an agentic workflow. Over weeks, not months, a kind of intuition assembled itself. They did not take a course. They did not read a book. They certainly did not develop PhD-level expertise on artificial general intelligence. They simply began, and the beginning metabolized into judgment.
I have definitely experienced this awkward stumbling in my own life. On November 30th, 2022, when ChatGPT was released to the public, I immediately felt that my life as an admissions consultant was over. I dove into learning computer science and machine learning (through programs like Harvard’s CS50, Andrew Ng’s DeepLearning.AI, and Y Combinator entrepreneur Matthieu Delac’s SheCodes). I started building, first websites, then small applications, and today I’m wrestling daily with agentic workflows, trying to keep my skills in step with all of the AI developments (definitely a tough ask when there’s a new drop from the frontier labs every two hours).
I tell you this not because I want you to do what I did — you almost certainly shouldn’t; most people cannot spare that kind of attention, and it is not required — but because in the time since, I have watched a lot of people approach these tools, and I have noticed a pattern. The people who do well with them are not the technically fluent. Some of the most technically fluent people I know are also the most paralyzed. The people who do well are the ones who gave themselves permission to use the tools badly for a while. They asked questions that were too vague and got answers that were too generic and tried again, a little less vague. They treated the first month as practice rather than performance. They did not wait to be ready.
You are not running Nvidia. You are not running Rome. You are running your Tuesday, and the folder in your head is getting heavier, and the thing being asked of you is smaller than either of theirs and also, in its own way, real. The move they both made is available to you. The move is to defer, deliberately and without guilt, the question of whether you are ready — to set it aside the way Marcus set aside the full scope of empire, the way Jensen sets aside the full arithmetic of starting a company — and to do the next small thing.
You do not need to understand what is happening in AI. You need to begin, a little, and let the understanding develop through the doing.
Jensen’s trick and Marcus’s discipline, translated for your Tuesday: the hardness of engaging with AI is real. Looking at it directly, from where you are now, will keep you from engaging. Begin anyway. The difficulty that looks paralyzing in advance becomes tractable in the doing.
How hard can it be?
