Welcome to HackerNoon’s Meet the Writer Interview series, where we learn a bit more about the contributors that have written some of our favorite stories.


Let’s start! Tell us a bit about yourself (name, profession, and personal interests).

My name is Norm Bond. I help founders, operators and creators think clearly in an environment that keeps accelerating. My work tracks the intersection of tech, markets and meaning. I focus on what happens when systems become powerful faster than they become understandable.

I’ve spent years in marketing, publishing and digital systems. I began my career as an IBM marketing rep selling mid-range computers. So I’ve been around long enough to see multiple “content revolutions” come and go.

Outside of work, I enjoy slowing down. Beaches, deep reading, playing chess and conversations that go somewhere real.

Interesting! What was your latest HackerNoon Top story about?

My latest story, “Slop Isn’t the Problem. It’s the Symptom,” looks at why low-quality AI content exists in the first place.

https://hackernoon.com/slop-isnt-the-problem-its-the-symptom?embedable=true

The core argument is simple: blaming AI for bad output misses the point. Slop is usually a reflection of unclear thinking, weak incentives, or systems optimized for speed over signal. AI just makes those flaws visible faster.

Do you usually write on similar topics? If not, what do you usually write about?

Yes, this is very much in my lane. I write about invisible failure modes. Places where systems technically work but still underperform because meaning, trust or accountability hasn’t been designed. That shows up in AI, startups, markets, leadership and sometimes culture. The surface topic changes. The underlying pattern doesn’t.

Great! What is your usual writing routine like (if you have one)?

I don’t write on a schedule. Most of my writing starts as thinking. Notes. Friction. Questions that won’t leave me alone. Many of my pieces start as a single sentence I can’t ignore. When something keeps resurfacing, that’s usually my signal. Drafts are fast. Rewrites are slow. I care more about clarity than volume, and I stop when the idea says what it needs to say.

Being a writer in tech can be a challenge. It’s not often our main role, but an addition to another one. What is the biggest challenge you have when it comes to writing?

Resisting noise. There’s constant pressure to react, comment, publish and perform. The harder challenge is deciding what not to write about. Writing well in tech often means stepping back long enough to see patterns instead of chasing shiny objects.

What is the next thing you hope to achieve in your career?

AI is changing the surface area of almost every profession. My goal is to help people develop judgment and strategic clarity so they don’t just keep up, but choose better paths forward. I hope to keep building a body of work that helps people think better under pressure. That feels like the right work right now.

Wow, that’s admirable. Now, something more casual: What is your guilty pleasure of choice?

Strong coffee, spiked with rum slightly too late in the day, while reading something unrelated to my work.

Walking without headphones. It’s surprisingly effective at noticing when an idea is finished, and when it isn’t.

What can the HackerNoon community expect to read from you next?

More writing on AI and human creativity. More systems-level thinking. Less tool hype. I’m especially interested in how creators, founders, and writers can maintain signal when output becomes cheap and noise becomes overwhelming.

What’s your opinion on HackerNoon as a platform for writers?

HackerNoon is totally unique for writers. It’s one of the few places where you can write something that doesn’t shout, doesn’t simplify for clicks, and still find an audience that’s genuinely right there with you. Here readers expect to engage, not just skim.

Thanks for taking the time to join our “Meet the Writer” series. It was a pleasure. Do you have any closing words?

Most systems don’t fail because they lack capability. They fail because no one designed how that capability would be understood. If something feels off but you can’t explain why, that’s usually where the real work is.