That 30% That Makes the Difference: What to Study Today to Work in Tech
Note: This is an AI-generated translation from my original Italian article: Quel 30% che fa la differenza: cosa studiare oggi per lavorare nell'informatica
A reflection on what has changed, what remains, and what I would advise someone starting out today.
My Happy Place
I was born in 1982. I received my first computer at age 5: an Amstrad CPC 464, with that green phosphor monitor and the built-in cassette recorder. By age 3 I could already read and write, and it didn't take me long to pick up the Locomotive BASIC manual and start writing my first lines of code.
I had no idea what an "IT career" was. I didn't know a software industry existed, I wasn't thinking about money, I wasn't thinking about the future. I only knew that screen was a place where I could create anything. You typed some words, pressed RUN, and something happened. It was magic. It was my magic.
There wasn't much money at home, and I'm still deeply grateful to my parents for that gift. I was an extremely curious child, the kind who takes everything apart, asks a thousand questions, never sits still. The Amstrad was the only thing capable of calming me down. I would sit in front of that screen and the world disappeared.
Then one day I discovered I could generate sounds. And what sounds. The CPC 464 had a sound chip, the AY-3-8912, with three audio channels, and I started squeezing every bit of potential out of it. Frequencies, envelopes, modulations. I spent hours creating melodies and sound effects with just a few lines of BASIC, then trying to figure out how games produced those impossible sounds. Chiptune fascinated me then and continues to fascinate me today: making music with a computer remains one of the most beautiful things I know.
Meanwhile, the years passed. My friends started playing with the Amiga, with its 4,096-color graphics and that stereo sampled sound that seemed to come from another planet. I stayed with my CPC 464. But that limitation pushed me to dig deeper. When BASIC was no longer enough, I moved to Z80 assembly, using MAXAM as my assembler. LD A,&FF. CALL &BC77. PUSH HL, POP DE, JP NZ, loop. Instructions that directly manipulated the processor, memory, the video chip. Stuff that made you feel like you had total control of the machine, byte by byte.
It was the Z80 that led me to discover the demoscene. Those demos that ran on hardware identical to mine but produced effects that seemed impossible: scrollers, rasterbars, plasma, sprite multiplexing. People who pulled things out of the CPC that the designers themselves hadn't anticipated. To me, it was the purest form of hacking: taking a machine, understanding every one of its limits, and then surpassing them.
For me, pure programming is my happy place. It is freedom and control at the same time. It is a place where I am not afraid, because I describe and define the rules of my world and everything responds to logic. If something doesn't work, there is always a reason. There are no ambiguities, no "it depends," no interpretations. There's a bug, you find it, you fix it. There's a logic you want to express, you write it, it works.
This relationship with programming has defined my life. It gave me a profession, certainly, but above all it gave me a way of thinking. It taught me to break down problems, to look for causes before effects, to not settle for the first solution that works. It taught me patience and stubbornness.
And now, after nearly forty years, that relationship is transforming. Not ending. Transforming.
The Question Everyone Asks Me
"What should I study to work in tech?"
I get asked this often. Students, people looking to change careers, parents asking on behalf of their children. And every time I struggle to answer, for a reason they probably don't expect.
The problem isn't that I don't have opinions. I have far too many. The problem is that for me, computer science was never a career. It was a passion that became a job, and this probably makes me the wrong person to ask for career advice. I can't put myself in the shoes of someone who looks at this world and sees first and foremost a salary, stability, a career path. There's nothing wrong with that perspective — it's perfectly legitimate — but I don't start from there, and so my advice has always been a bit skewed.
For years, the most honest advice I gave was: follow a passion. Be curious. Be a hacker in the true sense of the word. Not the movie kind, but the kind who takes things apart to understand how they work, who isn't satisfied with just using a tool but wants to know what's underneath. Read other people's code, break things, rebuild them better.
Was it good advice? Maybe yes, maybe no. It worked for people like me — people who would have programmed anyway, even without a paycheck. But for the others? For those who simply wanted a good job in a growing field? I don't know.
What I do know is that today the answer to that question has changed radically. And for the first time, I feel I can give advice that applies to everyone: the passionate and the pragmatic alike.
The Elephant in the Room
Let's talk about AI. Or rather, let's talk about it seriously, without the apocalyptic tones and without the press release enthusiasm.
Perhaps some expect me to see artificial intelligence as a threat. Someone who has been programming since age 5, who made writing code his happy place — how can he not feel threatened by a machine that writes code?
The truth is I chased it from the very first moment. When the first language models capable of generating code arrived, I wasn't afraid. I was curious. The same curiosity as the child in front of the Amstrad. What can this do? How far can it go? How can I use it?
And now, after months of intensive, daily use, integrated into every aspect of my work, I feel like I have superpowers. That's not an exaggeration. Things that used to take hours now take minutes. Prototypes that used to take days now take shape in real time. I have a tireless collaborator who knows practically every language, every framework, every pattern. It doesn't complain, doesn't have bad days, doesn't get offended if I ask it to redo everything from scratch.
But — and this is a "but" as big as a house — it's not that way for everyone.
I see colleagues who hate it. Who experience it as a personal affront, as if someone were diminishing their work. I see people who use it poorly, who copy and paste output without understanding it and then are surprised when things break in production. And I see — and this is the most dangerous — people who think it no longer makes sense to learn to program.
All three of these reactions are wrong. But the third is the one that worries me most.
The 70% Problem
There's a concept formulated by Addy Osmani — Google engineer and author of foundational texts on web development — that I think nails the point perfectly. He calls it "the 70% problem."
The idea is this: AI today is capable of generating about 70% of a software solution. The scaffolding, the boilerplate, the common patterns, standard implementations, basic documentation, mechanical tests. Everything that is predictable, repetitive, already seen. AI does it, and does it fast.
The remaining 30% is a different story.
That 30% is made of system architecture: deciding how the pieces fit together, not writing the individual pieces. It's made of edge cases — those boundary conditions no model anticipates because they require a deep understanding of the domain. It's made of security — not checklist security, but the kind that comes from the experience of having seen compromised systems and knowing where to look. It's made of decisions that require context, intuition, judgment.
And here's the paradox Osmani highlights: for junior developers, that 70% generated by AI seems like a miracle. "Look, I built a complete application in ten minutes!" For senior developers, that 70% is the easy part. The real work has always been in the 30%. And sometimes, fixing the 70% generated by AI takes more time than writing it from scratch.
This is the crucial point that anyone wanting to enter the tech world must understand: AI has not eliminated the need for competence. It has made it more important than ever.
The Knowledge Paradox
This is perhaps the most counterintuitive thing about this entire matter, and the one you should tattoo on your forehead if you're thinking about pursuing a career in tech.
The more you know, the more useful AI is to you. The less you know, the more dangerous AI is.
It sounds like a paradox, but think about it. When I ask AI to generate an authentication function, I don't write "create a login function." I write something like: "Implement an authentication system with JWT, refresh token rotation, rate limiting on failed attempts, bcrypt hashing, structured logging for the audit trail." I know what to ask because I know what's needed. I can evaluate the output because I know the correct patterns. I know where the AI will cut corners because I've cut them myself in the past and paid the consequences.
A junior who asks "create a login function" will get something that works. Technically. In a test environment. On a toy project. But in production, with real users, real attackers, real data? What they get is a ticking time bomb.
It's not AI's fault. The AI did exactly what it was asked. The problem is that the person asking didn't know what to ask. And they didn't know they didn't know, which is infinitely worse.
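To make the gap concrete, here is a minimal, hypothetical sketch of what the "knows what to ask" version looks like in practice. It is illustrative only, not production code: it uses the standard library's PBKDF2 as a stand-in for bcrypt (which needs a third-party package), a constant-time comparison, and a crude in-memory lockout after repeated failures. All names and thresholds are my own assumptions.

```python
import hashlib
import hmac
import os

# PBKDF2 stands in for bcrypt here; iteration count is illustrative —
# for production, follow current OWASP password-storage guidance.
ITERATIONS = 100_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a (salt, digest) pair for storage."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

# Crude rate limiting on failed attempts: lock out after 5 failures.
failures: dict[str, int] = {}

def attempt_login(user: str, password: str, salt: bytes, digest: bytes) -> bool:
    if failures.get(user, 0) >= 5:
        return False  # locked out
    if verify_password(password, salt, digest):
        failures.pop(user, None)  # reset the counter on success
        return True
    failures[user] = failures.get(user, 0) + 1
    return False
```

The naive "create a login function" version typically has none of this: plaintext or fast-hash storage, string equality, unlimited attempts. The code is not the hard part; knowing that these concerns exist is.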
So the first piece of advice, the fundamental one, the one that comes before everything else: learn the fundamentals. Truly. Not the basics of "I did a YouTube tutorial and deployed an app on Vercel." The real fundamentals. How memory works. How a network works. What a process is, what a thread is, what a race condition is. How a database works under the hood — not just how to write a query.
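To show why "what a race condition is" belongs in the fundamentals, here is a minimal Python sketch. The read-modify-write on a shared counter is the classic hazard; a lock makes it atomic. The unsafe variant is shown but not run, since its failures are nondeterministic by nature.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n: int) -> None:
    """Race condition: another thread can write between our read and our write."""
    global counter
    for _ in range(n):
        tmp = counter   # read
        tmp += 1        # modify
        counter = tmp   # write — updates from other threads can be lost here

def safe_increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:      # the lock makes the read-modify-write atomic
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, the result is deterministic: 4 * 10_000 = 40_000.
# Swap in unsafe_increment and increments are frequently lost.
```

A few lines of code, but the concept behind them — interleaved execution on shared state — is exactly the kind of knowledge that lets you spot the bug when AI-generated code forgets the lock.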
AI doesn't make this knowledge obsolete. It makes it the most powerful force multiplier you've ever had.
The Evolution of Skills
If I had to draw a map of the skills that matter today — that famous 30% — I would divide it into two major families.
The first is skills that already existed but now matter more.
Software Architecture. Designing complex, scalable systems. I'm not talking about choosing between microservices and monolith because the blog you read this morning said so. I'm talking about understanding trade-offs, knowing why an architectural choice that works for Netflix doesn't work for your startup with three users, being able to think in terms of distributed systems, consistency, availability.
System Design. Defining components and their interactions. Being able to draw on a whiteboard how the pieces talk to each other. Being able to identify bottlenecks before they manifest. Being able to design for failure, because systems will fail, and the difference is how prepared you are.
Performance Tuning. Optimizing based on a deep understanding of what's actually happening, not on superstitions. Knowing how to read a profiler, understanding where time is spent, distinguishing between a CPU problem and an I/O problem. AI can suggest optimizations, but it doesn't know what matters in your specific context.
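As an illustration of what "knowing how to read a profiler" means at its most basic, here is a minimal sketch using Python's built-in cProfile. The function name and workload are invented for the example; the point is measuring where time goes instead of guessing.

```python
import cProfile
import io
import pstats

def slow_squares(n: int) -> int:
    """A deliberately naive workload to profile."""
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
slow_squares(200_000)
profiler.disable()

# Render the five most expensive entries by cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
```

The report tells you which calls actually dominate. Optimizing anything the profiler did not flag is the "superstition" the paragraph above warns against.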
Security Analysis. And here a huge chapter opens up. AI doesn't just introduce solutions: it also introduces vulnerabilities. AI-generated code that no one has truly read and understood is potentially vulnerable code. Knowing where to look, knowing attack patterns, understanding risks — this skill hasn't just remained important, it has become critical.
The second family is new skills, born from interaction with AI itself.
Context Engineering. This is perhaps the most underrated skill. It's the ability to provide AI with the right context to obtain useful output. It's not trivial. It means understanding how the model reasons, what it needs to function well, and how to structure information so the model can process it most effectively. It's a bit like knowing how to write a good brief for a designer or a good specification for a development team, but with its own particularities.
Prompt Design. Related to the previous point, but distinct. Formulating precise and effective requests. It's not "talking to ChatGPT." It's understanding that the quality of input directly and often ruthlessly determines the quality of output. It's knowing when to be specific and when to be open, when to constrain and when to give freedom. It's a skill developed through practice and understanding of how these systems work.
Critical Thinking. Critical evaluation of AI output. This is perhaps the single most important skill. AI generates text and code with total confidence, even when it's wrong. It produces output that looks correct, that sounds correct, that has the form of something correct, but that can be completely wrong. Being able to distinguish, being able to doubt, being able to verify: this is the meta-skill that separates those who use AI from those who are used by it.
Verification. Systematic validation processes. Don't trust, verify. Or better, as they said during the Cold War: "trust, but verify." Trust AI for the 70% of mechanical work, but set up systematic processes to verify that 70% is correct. Automated tests, code review, security scanning, monitoring. The more AI produces, the more you need to verify.
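What does "systematic verification" look like at its smallest scale? Suppose the AI handed you the helper below (a hypothetical example — the function and its behavior are invented for illustration). The generated code is the 70%; the edge cases you think to check are the 30%.

```python
import re
import unicodedata

# Imagine this slugify helper came back from an AI assistant:
def slugify(text: str) -> str:
    """Turn arbitrary text into a lowercase, hyphen-separated URL slug."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = re.sub(r"[^a-zA-Z0-9]+", "-", text.lower())
    return text.strip("-")

# The verification is yours: boundary conditions the prompt never mentioned.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --  ") == ""                      # punctuation-only input
assert slugify("Café del Mar") == "cafe-del-mar"    # accented characters
```

Three assertions will not catch everything, but the habit scales: every AI-generated unit gets a checklist of inputs the model was never asked about.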
From Craftsman to Orchestra Conductor
There's a metaphor that perfectly explains this change: the shift from code craftsman to orchestra conductor.
The code craftsman — the craft I've loved my entire life — is the one who takes a problem, opens the editor, and line by line builds the solution. They know every character they've written, every decision they've made, every compromise they've accepted. It's intimate, precise, total work.
The orchestra conductor doesn't play any instrument during the concert. But they know every instrument, know how every section should sound, know when something is off tempo or off pitch. Without them, the orchestra produces noise. With them, it produces music.
The future of the developer is there. You no longer write every line of code, but you must know how every line should sound. You no longer implement every pattern, but you must recognize when a pattern is wrong. You no longer debug line by line, but you must know where to look when something isn't working.
And pay attention: this doesn't mean "less work." It means different work. In a sense, it means more responsibility. Because when you wrote everything yourself, the scope of action was limited by your speed. Now that AI multiplies your productive capacity, the scope of action explodes, and with it the number of things that can go wrong if you don't know what you're doing.
Every engineer today is, in a certain sense, a manager. Not of people, but of AI systems. And like every good manager, they must know how to delegate, how to review, how to give direction, how to recognize when work is done well and when it's not.
What Hasn't Changed (And Won't Change)
So far I've talked about technical skills. But there's a deeper level I'd like to touch on, because it's what truly makes the difference, and that no university curriculum and no bootcamp will teach you.
Curiosity. That of the 5-year-old in front of the Amstrad. The kind that makes you take things apart not because you have to, but because you want to know. AI doesn't replace it. If you're not curious, AI is just a code generator you don't understand. If you are curious, AI is a portal to an infinite number of things to explore.
The ability to understand beyond the bits. Software doesn't exist in a vacuum. It exists to solve real people's problems. The most elegant code in the world is useless if it solves the wrong problem. Understanding the domain, talking to users, grasping needs: these are deeply human skills that AI cannot and will not be able to replace.
Empathy. Yes, empathy. In software, it matters. It's needed to design interfaces that people can actually use. It's needed to write error messages that help instead of frustrate. It's needed to understand that the colleague who wrote that horrible code isn't stupid — they were in a hurry, had constraints, had different information than yours.
Intuition. That feeling that something is wrong, even if you can't yet explain what. That instinct that tells you "this architecture won't hold up" before you've done the math. That accumulated experience that manifests as a sixth sense. AI has statistics. We have intuition. And often, when statistics and intuition say different things, it's intuition that's right.
The hacker spirit. I mean the real kind: the desire to understand how things work, to face a system and think "what if I did this?" Not to break things — though sometimes breaking is the best way to understand — but to explore, to find limits, to go beyond what the manual says you can do.
These qualities aren't studied in a book. They're cultivated through practice, through experience, through the willingness to put yourself on the line. And they are, today more than ever, the true competitive advantage.
So, What Should You Actually Study?
Okay, after all this philosophy, let's try to be practical. If I were 20 today and wanted to build a career in tech, what would I do?
First of all, the foundations. Real computer science. Algorithms and data structures — not to pass Google interviews, but to have a mental model of how computers solve problems. Operating systems, to understand what happens beneath your code. Networking, because everything is distributed now and if you don't understand TCP/IP, HTTP, and the basics of system-to-system communication, you're building on sand. Databases — not just SQL, but the principles: normalization, indexes, transactions, consistency.
Yes, I know, it sounds like the syllabus of a '90s university course. And that's exactly the point. These things don't age. They are the foundations upon which everything else rests, and they are exactly the knowledge that will allow you to use AI as a superpower instead of a crutch.
Then, learn to think in terms of systems. Not individual functions, not individual endpoints, but systems. How do you design something that needs to serve a thousand users? And a hundred thousand? And ten million? Where do you put the cache? How do you handle failures? How do you do monitoring? How do you deploy with zero downtime? These are the 30% questions, and they're the questions AI can't answer on its own.
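As a toy illustration of the "where do you put the cache?" question, here is a minimal sketch using Python's built-in memoization. The slow backend is simulated; in a real system the same trade-offs (cache size, invalidation, what counts as a key) are where the design work lives.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=1024)
def expensive_lookup(user_id: int) -> dict:
    """Simulates a slow backend call (e.g. a database round trip)."""
    time.sleep(0.01)  # stand-in for real latency
    return {"id": user_id}

expensive_lookup(42)          # miss: pays the full cost
expensive_lookup(42)          # hit: served from the cache
info = expensive_lookup.cache_info()
```

One decorator, and repeated lookups become free — but deciding `maxsize`, handling stale data, and knowing when a cache hides a deeper problem is exactly the 30% no decorator answers for you.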
Learn at least one language well. Really well. Not five languages at a surface level — one, in depth. Understand how it manages memory, how its type system works, what its strengths and limits are. When you truly know one language, learning others becomes natural. And when you ask AI to write code in that language, you'll immediately know whether it's writing good stuff or stuff that "works but isn't the right way."
Study security. Not as a specialization — as a baseline skill. OWASP Top 10, authentication and authorization principles, how cryptography works (not how to implement it — AI does that — but why it works and when to use it). Every line of AI-generated code you haven't verified from a security standpoint is a risk. And risks in production become incidents, and incidents cost money.
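To ground "OWASP Top 10 as a baseline," here is the canonical injection example in miniature, using Python's built-in SQLite driver. The table and payload are invented for illustration; the pattern — parameterized queries treat input as data, never as SQL — is the real lesson.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

evil = "x'); DROP TABLE users; --"

# DANGEROUS (OWASP A03: Injection) — never build SQL by string interpolation:
#   conn.executescript(f"INSERT INTO users (name) VALUES ('{evil}')")
# With the payload above, that would drop the table.

# Safe: a parameterized query binds the input as a value.
conn.execute("INSERT INTO users (name) VALUES (?)", (evil,))

rows = conn.execute("SELECT name FROM users").fetchall()
# The malicious string is stored literally; the table survives.
```

AI assistants usually get this right when asked well, but they will also happily interpolate strings if the prompt nudges them that way. Recognizing which version came back is the baseline skill.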
Learn to work with AI. And I mean truly work — not "ask ChatGPT to write my homework." Learn context engineering: how to structure information so the model gives you useful output. Learn prompt design: how to formulate requests that produce quality results. Above all, learn critical thinking applied to AI: how to evaluate, verify, and integrate what AI produces.
And last but not least, cultivate the human side. Learn to communicate. Learn to write (not code — prose). Learn to explain complex things to non-technical people. Learn to work in teams, to manage conflicts, to give and receive feedback. These are the skills AI will never touch, and they're the ones that will make the difference when you're in the interview, in the office, or deciding a system's architecture with your team.
My Most Honest Advice
I come back to where I started. I've always struggled to give career advice because for me this profession isn't a career. It's a part of who I am.
But today, for the first time, I have advice that applies to everyone. The passionate and the pragmatic. Dreamers and calculators.
Don't compete with AI. Become the person who knows how to use it.
The 70% of mechanical work — boilerplate code, CRUDs, configurations, standard documentation — is becoming a commodity. It's the kind of work AI does well and will only do better. Investing your time in becoming the fastest at writing that kind of code is like training to run faster than a car. You can do it, but why?
The 30% — architecture, design, security, critical thinking, verification, domain understanding, the ability to ask the right questions — that is your territory. That's where your value grows, not shrinks, with the arrival of AI. That's where experience counts. That's where curiosity, empathy, and intuition make the difference.
That 5-year-old in front of the Amstrad knew nothing about careers, job markets, or economic prospects. He only knew he wanted to understand how that machine worked and what he could do with it.
Forty years later, with an artificial intelligence as a daily collaborator, the feeling is the same. The curiosity is the same. The desire to explore is the same. The tools have changed, enormously, but the spirit hasn't.
And that spirit, in the end, is the only thing you can't delegate to any machine.