<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Small Bets, Strong Systems]]></title><description><![CDATA[I write field notes on using technology, systems and capital to build and run small, durable online businesses.]]></description><link>https://blog.samiralibabic.com</link><generator>RSS for Node</generator><lastBuildDate>Sat, 18 Apr 2026 16:31:24 GMT</lastBuildDate><atom:link href="https://blog.samiralibabic.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Life After Work: Automation, Governance, and What Humans Are For]]></title><description><![CDATA[Life After Work: Automation, Governance, and What Humans Are For
What happens when most human labor is no longer needed?
Not the soft version, where tools make people a bit more productive. I mean the]]></description><link>https://blog.samiralibabic.com/life-after-work-automation-governance-and-what-humans-are-for</link><guid isPermaLink="true">https://blog.samiralibabic.com/life-after-work-automation-governance-and-what-humans-are-for</guid><category><![CDATA[AI]]></category><category><![CDATA[economics]]></category><category><![CDATA[society]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Fri, 17 Apr 2026 08:00:00 GMT</pubDate><enclosure url="https://samiralibabic.com/images/founders-notes/life-after-work/cover.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Life After Work: Automation, Governance, and What Humans Are For</h1>
<p>What happens when most human labor is no longer needed?</p>
<p>Not the soft version, where tools make people a bit more productive. I mean the harder version. Machines and models run factories, logistics, back offices, customer support, large parts of software, and eventually much of the physical world too. Output is abundant. Human effort is optional in more and more domains.</p>
<p>Most discussions about this future stay stuck on jobs. Which jobs will survive? Which skills will still matter? Which professions are “safe” from AI?</p>
<p>That is the wrong center of gravity.</p>
<p>If labor stops being the main way people earn a living, the real questions are different. Who owns the machines? Who captures the rents? Who decides what automated systems are allowed to do? What happens to money, school, and meaning when work is no longer the organizing principle of life?</p>
<p>That is the conversation worth having.</p>
<h2>The real problem is not production</h2>
<p>If machines can do almost all work better and cheaper than humans, the economy does not first run into a production problem. It runs into a distribution problem.</p>
<p>A society can produce huge amounts of goods and services and still become unstable if most people no longer have purchasing power. Output alone is not enough. People need some claim on that output.</p>
<p>From there, three broad futures appear.</p>
<h3>1. Managed abundance</h3>
<p>Automation drives prices down across much of the economy. People do not rely mainly on wages anymore. Instead, purchasing power is preserved through some mix of dividends, public transfers, citizen funds, pension ownership, or other ways of recycling automation rents back into society.</p>
<p>In that world, markets still exist. But they float on top of a guaranteed floor. People are not fighting for basic survival. They are competing over location, prestige, access, experiences, and taste.</p>
<p>This is the most stable version of an automated future. It requires broad ownership, or at least broad participation in the gains.</p>
<h3>2. Forced redistribution</h3>
<p>If ownership stays concentrated, automation profits accrue to a narrow capital base while labor income erodes. The economy can still produce plenty, but demand weakens because too many people are cut out of the upside.</p>
<p>Eventually, governments intervene under pressure. Taxes rise. Windfall levies appear. Controls, subsidies, and emergency transfers spread. Redistribution happens, but late, unevenly, and with a lot more conflict than it needed to.</p>
<h3>3. Collapse</h3>
<p>If concentration combines with weak institutions, energy shocks, political breakdown, conflict, or catastrophic system failures, things get darker. The problem is no longer just inequality. It becomes institutional failure.</p>
<p>Supply chains break. Energy becomes unreliable. Trust drops. Black markets expand. Formal rules give way to local improvisation.</p>
<p>The most likely global outcome is not one universal model. It is a patchwork. Some places manage abundance well. Some lurch through forced redistribution. Some fail.</p>
<h2>We are not there yet</h2>
<p>None of this is inevitable, and none of it is fully here.</p>
<p>We are still in the early displacement phase.</p>
<p>Software and knowledge work are automating quickly. Text, media, support, analytics, and parts of programming are already being transformed. Physical work moves more slowly, because robotics still depends on sensing, dexterity, integration, energy, and cost curves that have not yet broken wide open.</p>
<p>That matters. It means we are not in a post-labor world today. We are in the messy middle, where some tasks disappear, some expand, and institutions lag far behind the capabilities of the tools.</p>
<p>This is why so much of the current debate feels confused. People are arguing from a future that has not fully arrived, while living inside a transition that is already destabilizing.</p>
<h2>Do we still need money if everything is abundant?</h2>
<p>Probably yes.</p>
<p>Money exists because scarcity exists and barter is inefficient. If automation drives the cost of many goods toward zero, then the role of money shrinks in those areas. But scarcity does not disappear completely.</p>
<p>Some things remain scarce no matter how smart machines get.</p>
<p>Land stays scarce. Prime locations stay scarce. Reliable energy stays scarce. Compute at the highest level stays scarce. Time, attention, trust, and status all stay scarce.</p>
<p>So even in a world where food, entertainment, and many services are cheap or effectively guaranteed, some system is still needed to allocate the scarce parts. It may look like money. It may look more like energy credits, compute credits, access rights, or priority tokens. But the underlying logic survives.</p>
<p>Trade does not disappear. It shifts.</p>
<p>Instead of being mainly about survival, more of it becomes about differentiation. Better location. Better access. Better curation. Better experiences. Better reputation.</p>
<h2>Why school breaks when it is built for workers</h2>
<p>Modern mass education was shaped by the needs of industrial society.</p>
<p>Show up on time. Follow instructions. Sit still. Learn standard material. Move through a pipeline. Become a useful worker.</p>
<p>That model was always narrower than it pretended to be, but automation makes the mismatch impossible to ignore.</p>
<p>If tasks change faster than curricula, and if machines outperform humans at routine execution, then education cannot be mainly about preparing people for fixed jobs. It has to prepare them for agency.</p>
<p>That means the center of education shifts toward:</p>
<ul>
<li>reading, writing, argument, and media judgment</li>
<li>probability, statistics, optimization, and decision-making</li>
<li>computation, data, and AI literacy</li>
<li>scientific thinking and causal reasoning</li>
<li>ethics, safety, institutions, and law</li>
<li>collaboration, leadership, and conflict resolution</li>
<li>building real things, testing them, defending them, and improving them</li>
</ul>
<p>In an automated world, a serious project is no longer just “build something.” It is “design and govern an autonomous system for a real stakeholder.”</p>
<p>What is the goal? What are the constraints? What can it do automatically? What needs approval? What happens when it fails? Who is accountable?</p>
<p>That is a much more durable education than training someone to fit into a job ladder that may no longer exist.</p>
<h2>“AGI can do all that too”</h2>
<p>This is the strongest objection, and it is a good one.</p>
<p>If AGI or ASI becomes capable enough, why would it not also handle policy, ethics, governance, curation, and crisis response? Why assume humans remain central in any of those domains?</p>
<p>At the level of capability, it might not be wrong. Super-capable systems could model consequences better, search policy options better, and outperform humans in many kinds of judgment.</p>
<p>The important distinction is not capability. It is legitimacy and responsibility.</p>
<p>Even if a system can do everything, three gaps remain.</p>
<p>First, the <strong>objective gap</strong>. What should the goal be? How should different values be weighed? How should tradeoffs be made between freedom and safety, equality and efficiency, speed and fairness? There is no single objective answer waiting to be discovered.</p>
<p>Second, the <strong>legitimacy gap</strong>. Who has the right to decide? Authority does not come from intelligence alone. It comes from consent, process, and accepted rules.</p>
<p>Third, the <strong>responsibility gap</strong>. When harms occur, who answers for them? Who pays damages, gets removed, loses power, or is held accountable?</p>
<p>AI can propose goals, model outcomes, and even recommend tradeoffs. But it cannot, on its own, grant itself legitimate authority. It cannot be the ultimate source of political or moral permission unless humans first decide to hand that over.</p>
<p>That means the enduring human role is not “doing the work the machine cannot do.” It is being the legitimate principal of systems that do the work.</p>
<p>That includes writing the charters, setting the red lines, defining rights, deciding where discretion lives, and owning the consequences when things go wrong.</p>
<h2>So do we all just become governors?</h2>
<p>Not exactly.</p>
<p>If legitimacy remains human, then yes, society becomes more governance-heavy than it is today. But that does not mean everyone lives in assemblies all day.</p>
<p>A more plausible model is governance by default, service by choice.</p>
<p>Everyone holds basic rights and some light civic duties. You may vote occasionally on major questions. You may update your preferences. You may do rare jury or panel duty.</p>
<p>A smaller share of people, for a limited time, does deeper service through sortition, election, or appointment. They sit on standards boards, audit panels, or citizen assemblies. Their terms are short. Their power is constrained. Their decisions are logged and reviewable.</p>
<p>Professionals still exist too. Energy systems, health systems, biosafety, city management, and critical infrastructure all need competent operators. But those operators increasingly work under tighter charters, stronger audits, and more explicit public legitimacy.</p>
<p>Automation supports all of this. It drafts, simulates, checks compliance, summarizes tradeoffs, and keeps logs. Humans do not process every detail manually. They remain the ones who authorize, constrain, and answer.</p>
<h2>What do people do with all the free time?</h2>
<p>This is where the conversation often gets strangely shallow.</p>
<p>People say, “If work goes away and needs are covered, then everyone will just do what they love.”</p>
<p>Maybe. But we already live in a world where many people have some free time, and a large part of that time gets absorbed by low-effort, high-engagement loops. Infinite feeds. Endless videos. Passive entertainment. Doomscrolling.</p>
<p>So abundance alone does not automatically produce flourishing.</p>
<p>The scarce things in an abundant world are not just material. They are psychological and social. Time. Attention. Taste. Trust. Belonging. Meaning. Status. Real challenge.</p>
<p>This suggests that life after work is not organized around a job, but around a portfolio of meaning.</p>
<p>Some combination of:</p>
<ul>
<li>creation, making art, tools, research, or public artifacts</li>
<li>care, raising children, supporting family, mentoring, coaching</li>
<li>stewardship, maintaining places, systems, commons, archives, habitats</li>
<li>discovery, learning, science, travel, field work, serious curiosity</li>
<li>governance, your share of collective decision-making and institutional maintenance</li>
<li>play, sport, ritual, games, celebration, competition</li>
</ul>
<p>The key problem is not lack of options. It is lack of self-chosen constraint.</p>
<p>Without structure, abundance can dissolve into drift. With structure, it can become a civilization of creation, care, stewardship, and play.</p>
<h2>The doomscrolling objection is real</h2>
<p>There is no point romanticizing this.</p>
<p>A lot of people do not naturally move from easy consumption into mastery, contribution, or harder forms of play. Today’s platforms are designed to prevent that. They are built around variable rewards, infinite scroll, social proof, and low-friction repetition.</p>
<p>Boredom alone does not beat design.</p>
<p>If society wants more people in active modes rather than passive loops, it will need counter-design.</p>
<p>Less autoplay. More friction on passive feed loops. More small on-ramps into active participation. More social scaffolding. More visible ladders tied to outputs, not time spent consuming. Better rituals. Better communities. Better default environments.</p>
<p>This matters for education too. A future-ready curriculum should not just teach people how to use powerful systems. It should also teach them how their own attention is being captured, and how to build lives that are not consumed by passive stimulation.</p>
<h2>What to do in the next five years</h2>
<p>All of this is philosophical until it becomes practical.</p>
<p>If you want to benefit from this transition over the next five years, the strongest immediate lever is not some abstract future job category. It is agents and automations.</p>
<p>Not as hype. As real systems that save time, reduce errors, and operate inside clear guardrails.</p>
<p>A good starting move is simple.</p>
<p>Pick one narrow digital workflow. Map it end to end. Measure the current baseline. Then build an automation around it with a minimal architecture.</p>
<p>A trigger. A planner. Deterministic tools. Guardrails. Human approval for risky actions. An executor. An audit log. A rollback path.</p>
<p>Run it in shadow mode first. Then deploy it on a safe slice of real work. Measure hours saved, intervention rate, defect rate, and user trust.</p>
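<p>As a concrete sketch, that loop might look like the following. Every name here (<code>Action</code>, <code>run_workflow</code>, and so on) is invented for illustration, not taken from a real framework; the point is the shape: plan, gate risky steps behind human approval, log everything, and default to shadow mode.</p>

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    risky: bool = False  # risky actions require explicit human approval

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, event: str) -> None:
        self.entries.append(event)

def run_workflow(plan, approve, execute, shadow_mode=True):
    """Run a planned list of Actions under guardrails.

    approve(action) -> bool  asks a human about risky actions.
    execute(action) -> str   performs the real side effect.
    In shadow mode nothing executes; we only log what would happen.
    """
    log = AuditLog()
    for action in plan:
        if action.risky and not approve(action):
            log.record(f"SKIPPED (not approved): {action.name}")
            continue
        if shadow_mode:
            log.record(f"WOULD RUN: {action.name}")
        else:
            log.record(f"RAN: {action.name} -> {execute(action)}")
    return log

# Shadow-mode dry run: the risky step is gated, nothing touches real systems.
plan = [Action("fetch new invoices"), Action("send payment", risky=True)]
log = run_workflow(plan, approve=lambda a: False, execute=lambda a: "ok")
```

<p>The rollback path is deliberately missing from this toy: in practice each executed action would also record enough state to be reversed.</p>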
<p>If you can prove, in concrete terms, that a workflow now takes less time, makes fewer mistakes, and stays under control, you have something valuable.</p>
<p>Do that a few times and you are no longer just “using AI.” You are learning how to design and govern autonomous systems, which is one of the most useful skills in this transition.</p>
<p>At the same time, build around the bottlenecks that stay scarce.</p>
<p>Accumulate some exposure to broad equity, infrastructure, energy, and compute. Build small datasets with clear rights. Build distribution you control, such as an email list, community, or niche audience. Build a reputation for trustworthy work, not flashy demos.</p>
<p>And build your governance muscles.</p>
<p>You do not need to become a politician. But it helps to get comfortable with contracts, licenses, data rights, privacy, risk, and the basic mechanics of allocating scarce resources fairly.</p>
<p>The future likely rewards people who can set objectives clearly, constrain systems responsibly, and be trusted when something breaks.</p>
<h2>The real shift</h2>
<p>If automation keeps advancing, then the center of gravity moves.</p>
<p>From doing tasks, to deciding what should be done.</p>
<p>From competing on labor, to participating in ownership and stewardship.</p>
<p>From following systems, to governing them.</p>
<p>From organizing life around jobs, to organizing it around meaning, responsibility, and chosen forms of contribution.</p>
<p>That does not mean humans become irrelevant. It means the basis of human relevance changes.</p>
<p>The question is not whether there will still be something for people to do.</p>
<p>There will be.</p>
<p>The question is whether we design a society where people have a real claim on abundance, real legitimacy over the systems that shape their lives, and enough structure and purpose to do something better with freedom than merely scroll through it.</p>
<p>That is the real future-of-work debate.</p>
<p>It is not really about work at all.</p>
]]></content:encoded></item><item><title><![CDATA[Why DR and DA Aren’t Authority, and What User-Governed Search Might Look Like]]></title><description><![CDATA[Why DR and DA Aren’t Authority, and What User-Governed Search Might Look Like
Most people treat metrics like Domain Rating and Domain Authority as if they measured real world authority. A tool shows a]]></description><link>https://blog.samiralibabic.com/why-dr-and-da-aren-t-authority-and-what-user-governed-search-might-look-like</link><guid isPermaLink="true">https://blog.samiralibabic.com/why-dr-and-da-aren-t-authority-and-what-user-governed-search-might-look-like</guid><category><![CDATA[SEO]]></category><category><![CDATA[search]]></category><category><![CDATA[marketing]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Thu, 16 Apr 2026 08:00:00 GMT</pubDate><enclosure url="https://samiralibabic.com/images/founders-notes/user-governed-search-and-dr-da-metrics/cover.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Why DR and DA Aren’t Authority, and What User-Governed Search Might Look Like</h1>
<p>Most people treat metrics like Domain Rating and Domain Authority as if they measured real-world authority. A tool shows a number from 0 to 100 and it feels like a direct representation of how much Google trusts a site.</p>
<p>That is not what these metrics actually are.</p>
<p>In this post I will explain what DR and DA really measure, why they are structurally biased toward big brands and old sites, and why even Google itself has moved far beyond simple link-based thinking. I will also explore a thought experiment for a different kind of search system where users have explicit power over rankings instead of everything being controlled by a black box.</p>
<h2>What DR and DA really measure</h2>
<p>Domain Rating from Ahrefs and Domain Authority from Moz are both proprietary scores. They look similar on the surface. You get a number between 0 and 100 and higher is better. They are also logarithmic, which means that moving from 10 to 20 is much easier than moving from 70 to 80.</p>
<p>Under the hood both metrics are basically trying to answer one question.</p>
<p>How strong is this site's backlink profile compared to everything else in our index?</p>
<p>They do not look at your content quality. They do not know how fast your site is. They do not see your conversion rate. They do not know how real users behave on your pages. They are only modeling links.</p>
<p>The tools crawl the web and build their own independent link graphs. Then they estimate the relative strength of each domain based on how many other domains link to it, how strong those domains are, and how generously they link out. That last piece matters more than most people think.</p>
<h2>Why the best links often come from stingy sites</h2>
<p>If you read the original PageRank paper, you will find a simple idea at the core of the algorithm. Each page can pass on only a limited amount of value through its outgoing links. The more links it has, the less value each individual link can carry.</p>
<p>Most modern link based metrics behave in a similar way because the idea is still valid.</p>
<p>A link from a page that has three hand-picked outbound links is worth much more than a link from a page that links out to fifty random sites. The first is selective and editorial. The second looks like a directory or a link farm.</p>
<p>In practice this means that the best links often come from sites that rarely link to anyone. Think of small but respected niche publications, professional associations, or individual experts in a field. They may not have the highest DR in your tool, but when they do link to you it sends a strong signal.</p>
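<p>A toy version of that dilution rule makes it concrete. This is the textbook PageRank iteration, not anything a modern search engine actually runs:</p>

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                # a page's rank is split evenly across its outgoing links
                new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# "stingy" endorses exactly one site; "generous" sprays links everywhere.
graph = {
    "stingy": ["you"],
    "generous": ["you", "a", "b", "c", "d"],
    "you": [], "a": [], "b": [], "c": [], "d": [],
}
ranks = pagerank(graph)
```

<p>Both source pages end up with the same rank in this toy graph, but the stingy page passes all of it through a single link while the generous page splits it five ways, so "you" ends up ranked well above the pages that only received the diluted share.</p>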
<h2>The limitations of DR and DA</h2>
<p>Once you understand that DR and DA are just relative backlink strength scores, their limitations become obvious.</p>
<p>First, they are biased by each tool's index. Ahrefs sees a different portion of the web than Moz does. If a link is not in a tool's index, it does not exist for that tool's metric.</p>
<p>Second, they are one dimensional. They collapse everything into a single number that ignores on page quality, user behavior, technical health, brand, and intent matching.</p>
<p>Third, they are relative and moving. If the overall web graph changes, your number can move even if you do nothing. When another site gains or loses powerful links, it shifts the scale a bit for everyone.</p>
<p>Fourth, they are easy to misinterpret. People celebrate small jumps as major wins and feel stuck when the metric does not move, even while their real traffic and rankings do.</p>
<p>None of this makes DR or DA useless. It simply means they are rough diagnostic tools rather than a measure of true authority.</p>
<h2>How Google likely models links today</h2>
<p>Google still uses ideas that look like PageRank, but they are embedded in a much more complex system.</p>
<p>Links are no longer treated as generic votes.</p>
<p>A modern link signal is probably weighted by several factors at once.</p>
<ul>
<li><strong>Who is linking.</strong> Is the source part of a trusted seed set, and is it close to other known authorities?</li>
<li><strong>About what.</strong> How topically relevant is the linking page and its domain to the target? Two sites in the same niche likely reinforce each other more than a random cross-niche link.</li>
<li><strong>From where in the page.</strong> An in-content editorial link carries more weight than a footer or sidebar link.</li>
<li><strong>How many other links are present.</strong> The stingy-versus-generous site behavior appears again.</li>
<li><strong>How users behave.</strong> Do people actually click the link? Do they stay on the destination page or bounce right back to the search results? Does the link appear on pages that users generally trust?</li>
<li><strong>How stable the link is.</strong> A link that exists for years and keeps sending traffic looks more natural than a short-lived spike that appears and then disappears.</li>
</ul>
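<p>Purely as an illustration of the shape of such a model (the real factors and weights are unknown outside Google), a per-link value might combine those signals multiplicatively rather than counting each link as one generic vote. Every number below is invented:</p>

```python
def link_value(source_trust, topical_relevance, in_content,
               outdegree, click_through, years_live):
    """Hypothetical per-link score; all inputs and weights are illustrative."""
    placement = 1.0 if in_content else 0.3   # footer/sidebar links discounted
    dilution = 1.0 / max(outdegree, 1)       # stingy pages pass more per link
    stability = min(years_live / 3.0, 1.0)   # long-lived links look natural
    return (source_trust * topical_relevance * placement
            * dilution * click_through * stability)

# A selective, relevant, clicked editorial link versus a footer link
# from a page that links out to fifty sites.
editorial = link_value(0.9, 0.8, True, outdegree=3,
                       click_through=0.6, years_live=5)
footer = link_value(0.9, 0.2, False, outdegree=50,
                    click_through=0.05, years_live=0.5)
```

<p>Because the factors multiply, a link only scores well when it is strong on every axis at once, which matches the intuition that one weak dimension, whether placement, relevance, or user behavior, can discount an otherwise strong link.</p>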
<p>On top of this, Google has moved to entity based and semantic models. Instead of just matching text strings, it tries to understand which entities and topics a site is about and how they relate to each other across the web.</p>
<h2>Why this favors big brands and incumbents</h2>
<p>All of these layers create structural advantages for large, established brands.</p>
<p>Big brands are often part of the trusted seed sets in link based models. Government sites, major media companies, large universities, and leading associations all sit very close to the center of the trust graph.</p>
<p>They collect links naturally simply by being known. Journalists link to them. Bloggers reference them. People search for the brand by name, which further trains the system that this site is trustworthy and relevant.</p>
<p>They also have enormous behavioral data behind them. Many users click their results by default. They rarely look suspicious in terms of link velocity or anchor text, because the mentions are organic.</p>
<p>New sites have the opposite starting position.</p>
<p>They sit far away from the trust center. They lack branded searches. They have very little historical behavior. Any active link building they do looks more like promotion than like passive earning, which makes it easier for algorithms to discount low quality efforts.</p>
<p>From the point of view of a risk-averse search engine this bias is rational. If the choice is between sending users to a known brand with a long track record, and a fresh unknown domain with a handful of links, the safe bet is obvious.</p>
<p>The price is that it becomes much harder for new players to break through, even when they are objectively better.</p>
<h2>Is there a search model where users really decide?</h2>
<p>If Google and similar engines are structurally biased toward incumbents, it is natural to ask whether there is a different model where users have more explicit control over what ranks.</p>
<p>Social platforms are one partial answer. Reddit, StackOverflow, YouTube, and TikTok all use some form of user feedback to rank content.</p>
<p>Votes, likes, watch time, comments, and shares all shape what surfaces. This makes it easier for new content to break through and for communities to curate what matters to them.</p>
<p>However, these systems work inside closed platforms. They do not operate on the open web at the scale that Google does.</p>
<h2>A thought experiment: user-governed search with one scarce vote</h2>
<p>Imagine a search system that tries to bring some of this community power into web search.</p>
<p>Every user has a verified profile. There are no anonymous accounts and no easy bot swarms.</p>
<p>Every user gets one meaningful vote per month. Not per query and not per page, but one unit of scarce approval they can assign anywhere on the web.</p>
<p>If you use your vote on a page, you are saying that this page really helped you and that you want more people to find it.</p>
<p>Over time some pages and sites would accumulate these scarce votes. The search system could combine its own relevance and quality models with this thin but highly trusted signal.</p>
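<p>One way to picture how thin that signal would be: blend a conventional relevance score with a damped vote count. The function and weights below are invented for the thought experiment, not a real ranking formula:</p>

```python
import math

def blended_score(relevance, verified_votes, vote_weight=0.3):
    """relevance in [0, 1]; verified_votes = scarce monthly votes earned.

    log1p keeps the vote signal thin: the jump from 0 to 10 votes matters
    far more than the jump from 1000 to 1010, which blunts pile-on effects.
    """
    return (1 - vote_weight) * relevance + vote_weight * math.log1p(verified_votes)
```

<p>Under this toy blend, a genuinely endorsed page can outrank a slightly more relevant page with no votes, while vote hoarding runs into diminishing returns.</p>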
<h2>Pros of this model</h2>
<p>Scarcity forces seriousness. With only one vote per month, most people will not waste it on mediocre content or on manipulative campaigns.</p>
<p>Verification makes spam much harder. Votes would be tied to real humans, or at least to strongly verified identities, which removes whole categories of cheap manipulation.</p>
<p>It creates a public web of trust. Everyone can see which pages and sites have earned support across many independent users.</p>
<p>It rewards depth over volume. Publishers cannot brute force their way to visibility with hundreds of thin pages. They need a smaller number of truly excellent resources that people feel compelled to support.</p>
<h2>Cons and challenges</h2>
<p>Verification is hard. A global system that verifies everyone raises privacy, political, and logistical problems. Someone has to decide what counts as a real person.</p>
<p>Cold start is severe. New sites begin with nothing and might struggle to get early visibility in order to even earn votes. Early winners could lock in their advantage.</p>
<p>Herd behavior is real. Even with scarce votes, people might follow influencers or social pressure instead of their own judgment. Popularity could still dominate quality.</p>
<p>Most users will not participate deeply. Many people simply want answers and will never bother to spend their vote, or will use it randomly at the end of the month.</p>
<p>Global search scale is brutal. This kind of governance is far more realistic for smaller vertical search engines than for a universal one that indexes the entire web.</p>
<h2>Where a user-governed model could work</h2>
<p>Although this system is unlikely to replace Google, it could work very well in narrower domains where expertise and trust matter more than breadth.</p>
<p>Examples include medical information, legal research, scientific literature, niche professional communities, and internal company knowledge bases.</p>
<p>In those contexts verified participants have strong incentives to protect quality and reputation. The index is smaller. The community is more coherent. A scarce vote has more meaning.</p>
<h2>Practical takeaways for people doing SEO today</h2>
<p>First, treat DR and DA as useful but limited diagnostic tools. They can help you compare backlink profiles at a glance and spot large movements, but they are not the score that Google cares about.</p>
<p>Second, assume that Google sees much more than your link tools do. It has its own link graph, its own behavior data, and its own semantic understanding of content and entities.</p>
<p>Third, accept that the system favors incumbents and design around it. New sites need sharper positioning, better content in narrower niches, and a focus on earning a few truly excellent links from highly selective and relevant sources.</p>
<p>Fourth, remember that at the end of the chain there is still a human. If you consistently create pages that solve real problems for real people, and do enough distribution for those people to discover and remember you, then over time every metric, including the proprietary ones, will reflect that reality.</p>
<p>It just takes longer than most dashboards suggest.</p>
]]></content:encoded></item><item><title><![CDATA[LinkTracker Post-Mortem]]></title><description><![CDATA[LinkTracker Post-Mortem
TL;DR
LinkTracker.info was a small side project I built as a link shortener and click tracker. It slowly grew to ~3,000 organic Google clicks over 12 months and ranked on page ]]></description><link>https://blog.samiralibabic.com/linktracker-post-mortem</link><guid isPermaLink="true">https://blog.samiralibabic.com/linktracker-post-mortem</guid><category><![CDATA[SaaS]]></category><category><![CDATA[SEO]]></category><category><![CDATA[Startups]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Wed, 15 Apr 2026 08:00:00 GMT</pubDate><enclosure url="https://samiralibabic.com/images/founders-notes/linktracker-post-mortem/cover.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>LinkTracker Post-Mortem</h1>
<h2>TL;DR</h2>
<p>LinkTracker.info was a small side project I built as a link shortener and click tracker. It slowly grew to ~3,000 organic Google clicks over 12 months and ranked on page 1 for terms like "link tracker". Despite that SEO traction, usage and retention remained very low: 557 sign‑ups, 583 links created in total, and only 23 active users in the last 30 days.</p>
<p>A deep dive into usage showed that most of the volume came from adult/affiliate and spammy use cases I don’t want to support. The handful of legitimate small businesses weren’t enough to justify more development, marketing, or support.</p>
<p>I’ve decided to shut LinkTracker down. Existing short links will keep redirecting until <strong>April 30, 2026</strong>. After that, they will stop working. This post explains what I built, what actually happened, and why I chose to kill it instead of trying to turn it into a product.</p>
<hr />
<h2>Background: Why I Built LinkTracker</h2>
<p>The original idea for LinkTracker was simple:</p>
<blockquote>
<p>"Never share a link again without knowing if anyone has clicked on it."</p>
</blockquote>
<p>I wanted a lightweight, privacy‑respecting tool where you could:</p>
<ul>
<li>Shorten links.</li>
<li>Track clicks per link.</li>
<li>Organize links across campaigns.</li>
</ul>
<p>There are many link shorteners and analytics tools already, but I thought there was still room for something simple, clean, and focused on indie founders and small businesses.</p>
<p>I built it as a side project and deployed it on Vercel’s hobby tier. The app had:</p>
<ul>
<li>Email magic‑link authentication.</li>
<li>A basic dashboard with a table of links.</li>
<li>A simple click counter for each short URL.</li>
<li>A presentable marketing homepage.</li>
</ul>
<p>Then I mostly left it alone.</p>
<hr />
<h2>What Actually Happened</h2>
<p>Because I never wired up proper product analytics, the only reliable external data source I had was <strong>Google Search Console (GSC)</strong>.</p>
<h3>SEO Performance</h3>
<p>Over the last 12 months, GSC reported:</p>
<ul>
<li><strong>3,040 clicks</strong></li>
<li><strong>32,300 impressions</strong></li>
<li>Average CTR around <strong>9–12%</strong></li>
<li>Average position improving to about <strong>7.4</strong> in the last 28 days</li>
</ul>
<p>Most of the traffic came from a single family of queries:</p>
<ul>
<li>"link tracker" – 2,207 clicks, 22,537 impressions (12‑month window)</li>
<li>"linktracker" – 504 clicks</li>
<li>plus various typos: "link traker", "linktracer", "tracker link", etc.</li>
</ul>
<p>So SEO‑wise, LinkTracker accidentally did pretty well for a neglected side project.</p>
<h3>Product Usage Metrics</h3>
<p>Looking inside the app database, the core stats at shutdown time were:</p>
<ul>
<li><p><strong>Total registered users:</strong> 557</p>
</li>
<li><p><strong>Users who created at least one link:</strong> 314</p>
</li>
<li><p><strong>Total links created:</strong> 583</p>
</li>
<li><p><strong>Active users (last 30 days):</strong> 23</p>
<ul>
<li>Here, "active" means they created at least one link in that period.</li>
</ul>
</li>
</ul>
<p>Some quick implications:</p>
<ul>
<li>About <strong>56%</strong> of sign‑ups ever created a link at all (314/557).</li>
<li>The average is only <strong>1.86 links per user</strong> among those who used it at least once.</li>
<li>Only <strong>4% of all sign‑ups</strong> (23/557) were active in the last month.</li>
</ul>
<p>In other words, most people tried the product once, created one link, and never came back.</p>
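<p>As a quick sanity check, the ratios above follow directly from the raw counts (a throwaway Python sketch; the inputs are the database figures quoted above):</p>

```python
# Raw counts pulled from the app database at shutdown time.
total_users = 557
users_with_link = 314
total_links = 583
active_30d = 23

activation_rate = users_with_link / total_users       # share of sign-ups who ever created a link
links_per_activated = total_links / users_with_link   # links per user, among those who created at least one
active_share = active_30d / total_users               # share of all sign-ups active in the last 30 days

print(f"activation: {activation_rate:.0%}")        # -> 56%
print(f"links/user: {links_per_activated:.2f}")    # -> 1.86
print(f"30d active: {active_share:.0%}")           # -> 4%
```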
<h3>Onboarding Friction</h3>
<p>My onboarding flow didn’t help:</p>
<ul>
<li>To use the tool, you had to <strong>sign up first</strong> via email magic link.</li>
<li>Only after confirming your email could you see the “enter URL and shorten” form.</li>
</ul>
<p>For a utility tool discovered via a query like "link tracker", this is the wrong order. People want to paste a URL and immediately see something happen. I made them commit before showing any value.</p>
<p>I considered flipping this flow (a public "paste URL → track" form with no signup) to test whether conversion would grow from ~6% to something more meaningful. But by the time I looked at the data and the type of users coming in, I concluded the underlying product wasn’t worth that experiment.</p>
<hr />
<h2>The Power User Deep Dive: Who Was Actually Using It?</h2>
<p>I pulled the top 10 "power users" by click volume and inspected their domains and patterns. That analysis split them into two main groups.</p>
<h3>1. Adult / Affiliate / SEO Spam Cluster</h3>
<p>Several users:</p>
<ul>
<li>Used email addresses that look like throwaway accounts.</li>
<li>Created multiple links to domains like <code>nakedmilfs...</code>, <code>loosegirlsh...</code>, <code>naughtytrashy...</code>.</li>
<li>Rotated through 5+ domains per user.</li>
<li>Showed a suspicious traffic geography pattern: <strong>US (~60%) → Korea → Singapore → Germany</strong>.</li>
</ul>
<p>Another pair of users drove thousands of clicks to gibberish domains like <code>liservantuid.com</code>, <code>mistfabulous.com</code>, exhibiting the same geo pattern.</p>
<p>These accounts were almost certainly:</p>
<ul>
<li>Affiliate arbitrage.</li>
<li>Automated traffic.</li>
<li>Link farms / spam networks.</li>
</ul>
<p>They produced a large portion of the total click volume, but they are <strong>not</strong> the kind of "customers" I want or can monetize safely.</p>
<h3>2. A Few Legitimate SMB / Creator Users</h3>
<p>There were a handful of "real" users:</p>
<ul>
<li>A car rental / sales business in Chiang Mai with one link and 19k+ clicks.</li>
<li>A few real estate or local business use cases.</li>
</ul>
<p>These looked like genuine small businesses using LinkTracker in the way I originally imagined: one or a few core links with trackable clicks.</p>
<p>However, there were <strong>very few</strong> of them, and they were not on any paid plan (there was no pricing at all).</p>
<h3>3. File‑Sharing / Piracy‑Like Usage</h3>
<p>One user drove thousands of clicks to <code>mega.nz</code> and YouTube. This likely involved sharing large files or questionable content. That’s another risk category I’m not interested in building a product around.</p>
<hr />
<h2>Infrastructure, Performance and Reliability</h2>
<p>LinkTracker ran on Vercel’s hobby plan. Usage stats over a 30‑day window looked like this:</p>
<ul>
<li><strong>~89k edge requests</strong></li>
<li><strong>~18k function invocations</strong></li>
<li><strong>1.5% error rate</strong> over a 6‑hour slice</li>
<li>Firewall logs showing <strong>thousands of bot challenges and over a hundred denials</strong></li>
</ul>
<p>In practice, that meant:</p>
<ul>
<li>The app sometimes felt <strong>slow</strong> or returned <strong>HTTP 500</strong> errors.</li>
<li>Bot traffic and abusive usage were already putting pressure on a hobby‑tier setup.</li>
</ul>
<p>I could have invested time in:</p>
<ul>
<li>Better logging and error handling.</li>
<li>Rate limiting.</li>
<li>Bot protection and blocklists.</li>
<li>Possibly upgrading the plan.</li>
</ul>
<p>But those are only worth solving if the product itself is promising. The usage and user mix said otherwise.</p>
<hr />
<h2>Why I Decided to Shut It Down</h2>
<h3>1. Weak Retention and Engagement</h3>
<p>Despite ranking for high‑intent queries and getting hundreds of visits per month, the core metrics were poor:</p>
<ul>
<li>Only 23 users created a link in the last 30 days.</li>
<li>Most users created <strong>one link ever</strong> and never came back.</li>
<li>There were no signs of a healthy base of legitimate power users.</li>
</ul>
<p>These numbers are not the foundation of a sustainable SaaS, and they hadn’t budged in a long time.</p>
<h3>2. The Wrong Kind of Power Users</h3>
<p>The bulk of my "traffic success" came from use cases I don’t want:</p>
<ul>
<li>Adult content.</li>
<li>Spammy SEO networks.</li>
<li>Possible piracy/file‑sharing traffic.</li>
</ul>
<p>I could try to combat this with heavy moderation, abuse detection, and legal risk management, but that’s not why I build products.</p>
<h3>3. Crowded Market and No Clear Differentiation</h3>
<p>The link tracking space is crowded:</p>
<ul>
<li>Bitly, Rebrandly, and many others cover basic shortening.</li>
<li>More advanced tools provide deep analytics, integrations, and branded domains.</li>
</ul>
<p>LinkTracker didn’t have a sharp, differentiated angle. It was "another link tracker" that required signup first, and I never invested enough to make it stand out.</p>
<h3>4. Opportunity Cost</h3>
<p>Every hour I might spend fixing LinkTracker is an hour not spent on my more promising projects (like PrintOnDemandBusiness.com and other experiments).</p>
<p>Given the numbers, the right move is to <strong>stop</strong>, not to sink more time into a project that has already told me it doesn’t want to be a business.</p>
<h3>5. Clear Decision Criteria</h3>
<p>By the time I revisited LinkTracker, I had enough data to say:</p>
<ul>
<li>Organic traffic: decent.</li>
<li>Retention &amp; active users: poor.</li>
<li>User mix: heavily skewed to abuse/spam.</li>
<li>Path to monetization: very weak.</li>
</ul>
<p>That combination met my internal threshold for a <strong>decisive shutdown</strong> rather than passive neglect.</p>
<hr />
<h2>How the Shutdown Will Work</h2>
<p>I chose a <strong>graceful shutdown</strong> instead of simply killing it overnight.</p>
<ul>
<li><p><strong>Effective immediately:</strong></p>
<ul>
<li>No new accounts.</li>
<li>No new links.</li>
</ul>
</li>
<li><p><strong>Until April 30, 2026:</strong></p>
<ul>
<li>All existing short URLs keep redirecting to their original targets.</li>
<li>Users can export their links (short URL, destination, total clicks) as CSV and migrate elsewhere.</li>
</ul>
</li>
<li><p><strong>After April 30, 2026:</strong></p>
<ul>
<li>Short URLs will stop redirecting.</li>
<li>They will instead show a simple shutdown page explaining that LinkTracker has been discontinued.</li>
</ul>
</li>
</ul>
<p>After a reasonable data retention period, I will:</p>
<ul>
<li>Delete or anonymize user accounts and link data.</li>
<li>Decommission the Vercel project and database.</li>
<li>Remove or repoint DNS for <code>linktracker.info</code>.</li>
</ul>
<p>Users who rely on LinkTracker should move to another provider well before the cutoff date.</p>
<hr />
<h2>What I Would Do Differently Next Time</h2>
<p>Looking back, there are several lessons I’m taking into future projects.</p>
<h3>1. Start With a Narrow, Explicit Use Case</h3>
<p>"Anyone who shares links" is not a target market. A better starting point would have been something like:</p>
<ul>
<li>"Real estate agents who want to track which listing links perform best."</li>
<li>"Indie SaaS founders tracking clicks from their email signatures and social profiles."</li>
</ul>
<p>Without a clear user story, the product naturally attracted the kinds of users who exploit generic tools: spammers and affiliates.</p>
<h3>2. Show Value Before Asking for Commitment</h3>
<p>For utility tools, the flow should almost always be:</p>
<blockquote>
<p>Try → See value → Then sign up.</p>
</blockquote>
<p>My signup‑first magic‑link flow reversed that and killed a lot of potential experimentation from legitimate users.</p>
<p>Next time I’d:</p>
<ul>
<li>Let anyone paste a URL and get a working short link.</li>
<li>Show basic stats on a public or temporary page.</li>
<li>Offer signup only to save history or unlock extra features.</li>
</ul>
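<p>A minimal sketch of that flipped flow in Python (illustrative only: the function names and in-memory store are mine, not LinkTracker’s actual code):</p>

```python
import secrets

# In-memory store standing in for a real database (illustrative only).
links = {}  # short_code -> {"url": ..., "clicks": int, "owner": None}

def create_link_anonymous(url: str) -> str:
    """Flipped flow: anyone can shorten a URL with no account.
    The returned code doubles as the key for a public stats page."""
    code = secrets.token_urlsafe(4)
    links[code] = {"url": url, "clicks": 0, "owner": None}
    return code

def record_click(code: str) -> str:
    """Redirect handler: count the click, return the destination URL."""
    link = links[code]
    link["clicks"] += 1
    return link["url"]

def claim_link(code: str, user_id: str) -> None:
    """Signup happens only when the user wants to keep history."""
    links[code]["owner"] = user_id
```

<p>Signup becomes an optional upgrade step (<code>claim_link</code>) rather than a gate in front of the first short link.</p>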
<h3>3. Instrument Proper Analytics Early</h3>
<p>Relying solely on Google Search Console was a mistake. I should have, at minimum:</p>
<ul>
<li><p>Tracked basic events:</p>
<ul>
<li>Landing page views.</li>
<li>Link creations.</li>
<li>Signups after creating a link.</li>
<li>Returning users.</li>
</ul>
</li>
<li><p>Built a tiny dashboard with key ratios like:</p>
<ul>
<li><code>links_created / visitors</code>.</li>
<li><code>active_users / total_users</code>.</li>
</ul>
</li>
</ul>
<p>With that in place, I would have had a clear signal much earlier that retention was weak.</p>
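<p>Even a plain counter per event would have been enough. A sketch of what that minimal instrumentation could look like (the event names are hypothetical, not ones the app actually tracked):</p>

```python
from collections import Counter

events = Counter()  # minimal event store standing in for real product analytics

def track(event: str) -> None:
    """Record one occurrence of a named product event."""
    events[event] += 1

def funnel_ratios() -> dict:
    """The two ratios worth watching weekly (guarding against division by zero)."""
    visitors = events["landing_view"] or 1
    total_users = events["signup"] or 1
    return {
        "links_created / visitors": events["link_created"] / visitors,
        "active_users / total_users": events["user_active_30d"] / total_users,
    }
```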
<h3>4. Decide on Abuse Policy From Day One</h3>
<p>Any product that can be used at scale for free will attract unwanted usage.</p>
<p>Next time, I will:</p>
<ul>
<li>Decide upfront what categories of usage I’m willing to support.</li>
<li>Implement basic rate limiting and abuse detection early.</li>
<li>Be prepared to block entire classes of domains or content.</li>
</ul>
<h3>5. Set Explicit Kill or Commit Milestones</h3>
<p>I let LinkTracker sit in an undefined state for too long: not clearly alive, not clearly dead.</p>
<p>A better approach would have been to define milestones like:</p>
<ul>
<li>"If we don’t reach 100 legitimate monthly active users within 6 months of launch, we either pivot or shut down."</li>
</ul>
<p>and then actually act on that.</p>
<hr />
<h2>Closing Thoughts</h2>
<p>LinkTracker was never a big product. It was a side project that accidentally picked up SEO traction, attracted the wrong kind of power users, and never turned into something worth serious investment.</p>
<p>Shutting it down is the right decision. It frees up my time and attention for projects where the user base, usage patterns, and potential upside are much clearer.</p>
<p>I’m still glad I built it. It validated that I can rank a tool for a competitive keyword like "link tracker" with relatively little effort, and it reminded me how important it is to:</p>
<ul>
<li>Know exactly who you’re building for.</li>
<li>Measure the right things.</li>
<li>Kill projects decisively when they don’t cross the bar.</li>
</ul>
<p>If you’re a LinkTracker user, thank you for trying it. Please migrate your links before April 30, 2026. There are plenty of good alternatives out there—and I’ll be focusing my energy on building things that are a much better fit for the kind of users I actually want to serve.</p>
]]></content:encoded></item><item><title><![CDATA[From Kill the Ums to Voice Infrastructure]]></title><description><![CDATA[From Kill the Ums to Voice Infrastructure
A post-mortem on my filler-removal app

1. Context
In early 2025 I built a web app that takes an audio recording, removes filler words like "um", "uh", and "ä]]></description><link>https://blog.samiralibabic.com/from-kill-the-ums-to-voice-infrastructure</link><guid isPermaLink="true">https://blog.samiralibabic.com/from-kill-the-ums-to-voice-infrastructure</guid><category><![CDATA[AI]]></category><category><![CDATA[audio]]></category><category><![CDATA[Startups]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Tue, 14 Apr 2026 08:00:00 GMT</pubDate><enclosure url="https://samiralibabic.com/images/founders-notes/from-kill-the-ums-to-voice-infrastructure/cover.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>From Kill the Ums to Voice Infrastructure</h1>
<p><em>A post-mortem on my filler-removal app</em></p>
<hr />
<h2>1. Context</h2>
<p>In early 2025 I built a web app that takes an audio recording, removes filler words like "um", "uh", and "äh", and outputs a cleaner version in the speaker’s own cloned voice.</p>
<p>Under the hood it used:</p>
<ul>
<li>A simple web frontend and backend (Bolt.new, Supabase, external APIs).</li>
<li>Stripe for payments.</li>
<li>A pay-as-you-go model via Stripe Checkout: users bought processing credits priced per minute of input audio, with no subscriptions.</li>
<li>ASR (Whisper or similar) to transcribe and timestamp the audio.</li>
<li>An LLM to remove fillers and light disfluencies from the transcript.</li>
<li>ElevenLabs fast voice cloning to re-synthesise the cleaned text in the original speaker’s voice.</li>
</ul>
<p>The goal was to offer podcasters and creators a simple way to sound more polished, without manual editing and without losing their authentic voice.</p>
<p>The app shipped. It technically worked. It got essentially zero traction.</p>
<p>This is a post-mortem of that project: what I thought would happen, what actually happened, and what I learned about AI audio, user needs, and distribution along the way.</p>
<hr />
<h2>2. Original Hypothesis</h2>
<h3>2.1 Problem Hypothesis</h3>
<p><strong>Belief</strong></p>
<blockquote>
<p>Podcasters and creators hate their own filler words and disfluencies. They would gladly pay for an automatic way to clean those up.</p>
</blockquote>
<p>More specifically:</p>
<ul>
<li>Filler words make people sound unprofessional.</li>
<li>Manual editing is tedious and time-consuming.</li>
<li>Existing tools do some cleanup, but there is room for a dedicated and smarter product.</li>
</ul>
<h3>2.2 Solution Hypothesis</h3>
<p><strong>Belief</strong></p>
<blockquote>
<p>A web app that removes fillers and re-voices the cleaned script in the user’s own cloned voice is a valuable and differentiated product.</p>
</blockquote>
<p>Key assumptions baked into that:</p>
<ol>
<li>Full re-synthesis (TTS plus cloning) is better than surgical waveform editing.</li>
<li>Podcasters are okay with an AI version of their voice as long as it sounds good.</li>
<li>“Fewer ums” is a strong enough value proposition on its own.</li>
<li>It is fine that the product lives outside their main editing tools.</li>
</ol>
<p>Most of these assumptions turned out to be either wrong or only partially true.</p>
<hr />
<h2>3. What I Actually Built</h2>
<h3>3.1 Product Shape</h3>
<p>The shipped version allowed users to:</p>
<ul>
<li>Upload audio or record in the browser.</li>
<li>Create a one-time voice clone via ElevenLabs samples.</li>
</ul>
<p>The pipeline looked like this:</p>
<ol>
<li>Transcribe audio with timestamps.</li>
<li>Use an LLM to clean the text, remove fillers, and fix some disfluencies.</li>
<li>Use ElevenLabs to re-synthesise the cleaned script in the cloned voice.</li>
</ol>
<p>The output was:</p>
<ul>
<li>A new, polished audio file.</li>
<li>Optional subtitles.</li>
</ul>
<p>Under the hood it used a Bolt.new frontend, Supabase for authentication, storage, and edge functions, Stripe Checkout for credits, and ElevenLabs plus Whisper for the heavy lifting.</p>
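<p>In outline, the whole product was three stages chained together. A schematic Python version with the stages injected as plain callables (the real app called Whisper, an LLM, and ElevenLabs over HTTP; the regex cleaner below is only a naive stand-in for the LLM step):</p>

```python
import re
from typing import Callable

# Naive filler matcher: "um", "uh", "äh" plus a trailing comma/period and whitespace.
FILLERS = re.compile(r"\b(um+|uh+|äh+)\b[,.]?\s*", flags=re.IGNORECASE)

def naive_clean(transcript: str) -> str:
    """Stand-in for the LLM step: strip obvious filler words from a transcript."""
    return FILLERS.sub("", transcript).strip()

def process(audio: bytes,
            transcribe: Callable[[bytes], str],
            clean: Callable[[str], str],
            synthesize: Callable[[str], bytes]) -> bytes:
    """ASR -> text cleanup -> TTS in the speaker's cloned voice."""
    transcript = transcribe(audio)
    cleaned = clean(transcript)
    return synthesize(cleaned)
```

<p>Keeping the stages as swappable callables is also what makes the pipeline reusable for other voice products later.</p>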
<h3>3.2 What Worked Technically</h3>
<ul>
<li>The pipeline itself worked and the before-and-after difference was clear.</li>
<li>Voice cloning was surprisingly good for short-form content.</li>
<li>The infrastructure choices (Supabase plus external AI APIs plus Stripe) were fast to build and easy to operate.</li>
</ul>
<p>I also learned a lot about:</p>
<ul>
<li>ASR → text → TTS chains.</li>
<li>Latency, batching, and cost trade-offs.</li>
<li>Voice cloning UX and consent considerations.</li>
</ul>
<p>On paper, it was a nice project. It just was not a good product.</p>
<h3>3.3 Timeline and Numbers</h3>
<p>I built the first version over a couple of days for a hackathon. After that, I kept it online for a few months as a live experiment.</p>
<p>During that time:</p>
<ul>
<li>Traffic was in the low hundreds of visitors in total.</li>
<li>Only a handful of people ever uploaded a file.</li>
<li>No one converted to paid minutes.</li>
</ul>
<p>I also shared it in a few niche communities and entered it into a hackathon. Nothing suggested there was a strong pull for this product in this particular shape.</p>
<p>From a numbers perspective, ClipClean never got beyond the stage of “toy used mostly by the builder”.</p>
<hr />
<h2>4. Why It Did Not Get Traction</h2>
<h3>4.1 The Problem Was Already “Good Enough” Solved</h3>
<p>Most podcasters who care about filler words are already using tools like Descript, Cleanvoice, Podcastle, Adobe Podcast and others.</p>
<p>These tools:</p>
<ul>
<li>Remove fillers in one click.</li>
<li>Handle silence trimming, noise reduction, compression and similar tasks.</li>
<li>Live directly in the editing workflow.</li>
</ul>
<p>For the average podcaster, the pain of “ums” is not large enough to justify:</p>
<ul>
<li>Cloning a voice.</li>
<li>Uploading episodes to a separate app.</li>
<li>Waiting for TTS re-synthesis.</li>
</ul>
<p><strong>Lesson:</strong> competing against “good enough” inside a user’s existing tool is much harder than it looks from the outside.</p>
<h3>4.2 The Solution Was Heavier Than the Pain</h3>
<p>The pipeline is objectively cool: transcribe, clean the text, re-voice it in your own clone. But cool was not enough.</p>
<p>For most users this is a sledgehammer for a thumbtack:</p>
<ul>
<li>They do not need full re-synthesis to remove fillers.</li>
<li>They are not actively asking for an AI replica of their voice.</li>
<li>Many are happy with some imperfections, because authenticity matters more.</li>
</ul>
<p>The result was technically impressive and practically overkill.</p>
<p><strong>Lesson:</strong> just because something is technically possible and fun does not mean it maps to a strong enough user desire.</p>
<h3>4.3 Wrong Default Target: Generic Podcasters</h3>
<p>I implicitly optimised for a “generic podcaster” as the user:</p>
<ul>
<li>Hosts who want cleaner sounding shows.</li>
<li>People annoyed by their own speech patterns.</li>
</ul>
<p>In reality, this group splits into three subgroups:</p>
<ol>
<li><strong>Authenticity-first creators.</strong> They accept fillers as part of their style and removing everything actually feels wrong.</li>
<li><strong>Process-heavy shows.</strong> They already have an editor or a solid editing workflow and they are not hunting for a new tool for one specific step.</li>
<li><strong>Non-native speakers.</strong> This is the interesting niche, but the product and messaging were never focused squarely on them.</li>
</ol>
<p>The one group where the idea does resonate is:</p>
<blockquote>
<p>Non-native speakers who are self-conscious about sounding hesitant and would like more fluent-sounding audio in their own voice.</p>
</blockquote>
<p>I never leaned fully into that niche and the product itself was not clearly designed or branded as a “speak more fluently in your second language” tool.</p>
<p><strong>Lesson:</strong> a vague “for podcasters” target hides the one niche that might actually care enough.</p>
<h3>4.4 The Video Story Was Weak</h3>
<p>For audio-only content, timing is not a big problem.</p>
<p>For video, which is where many creators live, timing becomes much more important:</p>
<ul>
<li>Re-synthesised audio is shorter and differently paced.</li>
<li>Lip sync breaks for talking heads.</li>
<li>Fixing that properly means either editing the video to match the new audio, or generating a synthetic talking head that matches the TTS.</li>
</ul>
<p>Both of those options are, in effect, entirely different products.</p>
<p>So the app was implicitly audio-only in a world where much creator output is video-first.</p>
<p><strong>Lesson:</strong> full TTS re-synthesis and lip-visible video do not work well together unless you are willing to solve video as well.</p>
<h3>4.5 Distribution Reality and SEO Check</h3>
<p>There was also a distribution problem.</p>
<p>There is no real plugin marketplace for this kind of thing:</p>
<ul>
<li>Podcast hosts and directories do not expose audio-processing plugin stores.</li>
<li>Descript, Riverside and similar tools do not have public marketplaces for third-party audio effects.</li>
<li>Audio plugin marketplaces for VST or AU exist, but they are music-oriented and require a very different product shape.</li>
</ul>
<p>There was no natural place to list it and get organic discovery. The app would have had to win attention the hard way, by manually reaching creators, running content, or building a full distribution engine.</p>
<p>I also did a sanity check on SEO. When I looked at search volumes for obvious queries such as “remove filler words from audio” or “podcast um remover”, demand was essentially non-existent. The organic search channel for this exact value proposition was close to zero.</p>
<p>Given that:</p>
<ul>
<li>The product did not live inside existing tools.</li>
<li>There was no marketplace to piggyback on.</li>
<li>SEO demand for the “remove ums” idea was tiny.</li>
</ul>
<p>I did not invest heavily in further distribution, because the product–market fit signals were weak from the start.</p>
<p>I could have forced the issue with aggressive outreach, cold emails, paid ads, or content marketing. Given the weak pull, I decided not to spend weeks or months trying to brute-force adoption. The distribution story was not only hard. It also did not feel worth solving for this particular product.</p>
<p><strong>Lesson:</strong> there is no distribution cheat code here. Either the product is embedded where work already happens, or it has to earn its own audience from scratch. Ideally SEO should at least tell you that there is an audience.</p>
<h3>4.6 Even as an Asset, It Did Not Move</h3>
<p>As a last step, I tried to sell the project as an asset (code plus domain plus pipeline) on a marketplace such as Flippa.</p>
<ul>
<li>It was pre-revenue, with no traction and a narrow use case.</li>
<li>Interest was low and it did not sell.</li>
</ul>
<p>That is another data point. Even among buyers who like small AI tools, a filler-removal app with no traction and no clear wedge is a hard sell.</p>
<p><strong>Lesson:</strong> if both users and acquirers are lukewarm, the right move is to archive the project and keep only the parts that compound, such as skills, infrastructure patterns, and code.</p>
<hr />
<h2>5. What I Learned About AI Audio and Voice</h2>
<p>Beyond the specific product, the project was a good deep dive into the current state of AI audio.</p>
<h3>5.1 The Core Stack Is Mature</h3>
<ul>
<li>Whisper-class ASR models are strong enough to drive production pipelines.</li>
<li>Modern voice cloning, such as ElevenLabs, can produce convincing and on-brand voices with relatively little data.</li>
<li>LLMs are capable of decent disfluency removal and light rewriting on real and messy transcripts.</li>
</ul>
<p>This combination opens up many possibilities beyond “kill the ums”.</p>
<h3>5.2 UX, Consent, and Expectations Matter</h3>
<p>Working with cloned voices forces you to think about:</p>
<ul>
<li>How you obtain and store consent.</li>
<li>How you communicate what is synthetic versus original.</li>
<li>How much control users have over what is re-voiced and what stays real.</li>
</ul>
<p>Some people are very comfortable with an AI version of their voice. Others are uneasy once they hear it. That alone changes how you need to design the product, the onboarding, and the defaults.</p>
<p><strong>Lesson:</strong> with voice, the technical pipeline is only half the problem. The other half is comfort, expectations, and trust.</p>
<h3>5.3 Timing, Alignment, and Human Perception</h3>
<p>Small timing differences that do not matter for audio-only can be very noticeable on video.</p>
<p>I gained a much better intuition for:</p>
<ul>
<li>How tightly audio needs to track original pacing to feel natural.</li>
<li>Where listeners are sensitive versus where they do not care.</li>
<li>How important context (music, b-roll, or slides) is for hiding small desync.</li>
</ul>
<p>These instincts are reusable for any future voice or dubbing product.</p>
<hr />
<h2>6. Opportunity: Pivoting the Tech, Not the Product</h2>
<p>While this specific product did not land, the underlying machinery is reusable.</p>
<p>The stack I built – ASR → text → LLM → TTS with voice cloning and per-minute billing – can be a foundation for more promising directions, for example:</p>
<ol>
<li><p><strong>Multilingual dubbing</strong></p>
<ul>
<li>Take scripts or recordings and produce fluent voiceovers in multiple languages using the same voice identity.</li>
<li>Here full re-synthesis is a clear advantage, because you cannot just edit the original waveform.</li>
</ul>
</li>
<li><p><strong>Script-to-speech tools for founders and experts</strong></p>
<ul>
<li>Turn written content, such as blog posts, documentation, or newsletters, into narrated audio in the author’s voice.</li>
<li>This plays nicely with asynchronous content and does not fight existing editing workflows.</li>
</ul>
</li>
<li><p><strong>Internal "voice infrastructure" for future projects</strong></p>
<ul>
<li>Reuse the ASR → LLM → TTS chain as an internal component for other SaaS ideas.</li>
<li>Avoid solving auth, billing, file handling, and basic orchestration from scratch next time.</li>
</ul>
</li>
</ol>
<p>Each of these plays more to the strengths of the stack, without competing head-on with Descript’s filler-removal button.</p>
<p>The key shift is to focus on use cases where full re-synthesis is clearly an advantage, not an over-engineered way to solve a solvable editing problem.</p>
<hr />
<h2>7. Biggest Takeaways</h2>
<p>If I had to condense the whole experience into a few points:</p>
<ol>
<li><p><strong>Distribution and workflow integration matter more than clever pipelines.</strong></p>
<ul>
<li>If you are not inside the tools people already use, you need a very strong and distinct value proposition.</li>
</ul>
</li>
<li><p><strong>“Technically impressive” is not the same as “emotionally compelling”.</strong></p>
<ul>
<li>Getting rid of “um” and “ah” is nice. It is rarely the thing people are desperate to fix.</li>
</ul>
</li>
<li><p><strong>Niches beat generic personas.</strong></p>
<ul>
<li>“For podcasters” was too broad. “For non-native speakers who want to sound more fluent in English” would have been more honest and more interesting.</li>
</ul>
</li>
<li><p><strong>Post-mortems are assets, not obituaries.</strong></p>
<ul>
<li>The code, the infrastructure patterns, and the learnings about AI audio are a toolbox. The fact that this particular product did not take off does not make the work wasted.</li>
</ul>
</li>
</ol>
<hr />
<h2>8. Closing</h2>
<p>I did not get the outcome I imagined when I started this app. There is no neat MRR chart, no case studies from happy podcasters, no “we scaled to X users” story.</p>
<p>What I do have is:</p>
<ul>
<li>A tested, working pipeline for ASR → LLM → TTS with cloned voices.</li>
<li>A much clearer mental model of the audio and podcasting tool landscape.</li>
<li>A concrete reminder to validate distribution and workflow fit earlier.</li>
</ul>
<p>That is enough to call the project a useful experiment, and a good foundation for whatever voice-related thing comes next.</p>
]]></content:encoded></item><item><title><![CDATA[Why I Stopped Chasing POD Store Wins]]></title><description><![CDATA[Why I Stopped Chasing POD Store Wins
For a long time, print-on-demand was the business model I wanted to crack.
On paper, it looked perfect. No inventory. Low upfront risk. Huge product variety. Globa]]></description><link>https://blog.samiralibabic.com/why-i-stopped-chasing-pod-store-wins</link><guid isPermaLink="true">https://blog.samiralibabic.com/why-i-stopped-chasing-pod-store-wins</guid><category><![CDATA[ecommerce]]></category><category><![CDATA[print-on-demand]]></category><category><![CDATA[Startups]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Mon, 13 Apr 2026 08:00:00 GMT</pubDate><enclosure url="https://samiralibabic.com/images/founders-notes/why-i-stopped-chasing-pod-store-wins/cover.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Why I Stopped Chasing POD Store Wins</h1>
<p>For a long time, print-on-demand was the business model I wanted to crack.</p>
<p>On paper, it looked perfect. No inventory. Low upfront risk. Huge product variety. Global fulfillment. A business you could start without a warehouse, a team, or much capital.</p>
<p>So I did what many people in this space do. I launched stores. I tested niches. I uploaded designs. I tweaked listings, changed suppliers, adjusted pricing, watched competitors, and tried to make the economics work.</p>
<p>I made sales.</p>
<p>But I never built a POD store business that felt truly solid.</p>
<p>The margins were thin. Attention was hard to get and even harder to keep. Small changes in ad costs, platform algorithms, supplier pricing, or competition could wipe out what little momentum I had. Everything felt fragile.</p>
<p>At some point, the question changed.</p>
<p>Instead of asking:</p>
<blockquote>
<p>How do I finally make this store work?</p>
</blockquote>
<p>I started asking:</p>
<blockquote>
<p>Where in this ecosystem does it actually make sense for me to stand?</p>
</blockquote>
<p>That question changed everything.</p>
<p>It led me away from trying to win as just another seller, and toward building infrastructure around the industry instead. In my case, that became <strong>PrintOnDemandBusiness.com</strong>, a directory and research platform for print-on-demand suppliers and tools.</p>
<p>Same person. Same industry. Very different results.</p>
<p>This post is my attempt to explain why.</p>
<h2>POD is not dead. But it is not what most people think.</h2>
<p>Let me start with the obvious.</p>
<p>Print-on-demand is not dead.</p>
<p>People still buy custom products. Sellers still make money. Suppliers still process huge order volumes. Platforms like Etsy, Amazon, Printful, Printify, and others are not imaginary businesses.</p>
<p>So the conclusion is not:</p>
<blockquote>
<p>There is no money in POD.</p>
</blockquote>
<p>The more accurate conclusion is this:</p>
<blockquote>
<p>There is money in POD, but it does not flow evenly.</p>
</blockquote>
<p>And that is where most people get confused.</p>
<p>A lot of beginners look at the category and think they are entering a neutral market where good design and hard work are enough.</p>
<p>That is not really what is happening.</p>
<p>POD is an ecosystem with layers. And depending on where you stand in that ecosystem, the odds can look completely different.</p>
<h2>The POD value chain</h2>
<p>Once I stopped looking at POD only through the lens of my own stores, the industry became much easier to understand.</p>
<p>There is a stack.</p>
<h3>1. The long tail of individual sellers</h3>
<p>This is where most people start.</p>
<p>A few designs on Etsy. Maybe a Shopify store. Maybe Amazon Merch if they get in. Common niches like pets, travel, memes, jobs, hobbies, retro aesthetics, funny quotes.</p>
<p>This layer is noisy and crowded.</p>
<p>Most sellers make little or nothing. A few make decent side income. A very small minority build something meaningful.</p>
<p>The problem is not that these people are lazy. The problem is structural.</p>
<p>They are trying to sell into markets where:</p>
<ul>
<li>barriers to entry are low,</li>
<li>competition is massive,</li>
<li>attention is expensive,</li>
<li>and products are easy to copy.</li>
</ul>
<p>If you do not already have distribution, or a very strong niche angle, you are fighting uphill from day one.</p>
<h3>2. Sellers with attention</h3>
<p>This is a very different category of seller.</p>
<p>These are creators, brands, niche communities, influencers, or businesses that already have an audience.</p>
<p>For them, POD is not primarily a discovery engine. It is a monetization layer.</p>
<p>They do not depend entirely on Etsy search, Amazon search, or random Meta ads to survive. They can bring their own demand.</p>
<p>That changes everything.</p>
<p>If you already own attention, POD can be a great business model. You get low operational complexity, wide product variety, and fast merchandising without holding stock.</p>
<p>That is why some people genuinely do well with it.</p>
<p>But that does not mean the average new seller will get similar results.</p>
<h3>3. Suppliers and fulfillment platforms</h3>
<p>Then you have the fulfillment layer.</p>
<p>Printful. Printify. Gelato. And many more.</p>
<p>These businesses aggregate small margins across many sellers and many orders. They win by sitting in the flow of commerce rather than betting on one store or one design.</p>
<p>They still face pressure, of course. Competition is strong. Sellers compare prices aggressively. Some suppliers disappear. Others grow.</p>
<p>But structurally, this is a better position than being one more store owner trying to win a niche battle with a handful of listings.</p>
<h3>4. Marketplaces</h3>
<p>Then there are the real bosses.</p>
<p>Amazon. Etsy. Redbubble. The platforms that control buyer intent and default consumer behavior.</p>
<p>They own the search box. They own the traffic. They own the habit.</p>
<p>They get paid whether your product wins or your competitor’s does.</p>
<p>This is why new marketplaces struggle so much. Starting from zero attention and trying to compete with established buyer behavior is brutal.</p>
<h3>5. Infrastructure around POD</h3>
<p>This is the layer I eventually moved into.</p>
<p>Directories. Comparison sites. Tools. Agencies. Workflow products. Research platforms. Service providers.</p>
<p>In other words, the businesses that help sellers and suppliers make better decisions and operate more efficiently.</p>
<p>This is where <strong>PrintOnDemandBusiness.com</strong> sits.</p>
<p>I am not competing with sellers on design.
I am not competing with suppliers on fulfillment.
I am helping sellers discover suppliers and tools, while helping suppliers get found by the right audience.</p>
<p>Instead of being one more booth inside an overcrowded market hall, I am building a map of the market hall.</p>
<p>That is a very different business.</p>
<h2>Attention is the real currency</h2>
<p>Once you look at the industry through this lens, one pattern becomes very obvious:</p>
<blockquote>
<p>Attention wins.</p>
</blockquote>
<p>Not in a motivational-poster way. In a mechanical way.</p>
<p>Look at who consistently captures value in POD:</p>
<ul>
<li>marketplaces that own buyer traffic,</li>
<li>suppliers that buy, borrow, or embed themselves into traffic flows,</li>
<li>sellers who already have audiences,</li>
<li>infrastructure businesses that become useful routing points inside the ecosystem.</li>
</ul>
<p>What do they all have in common?</p>
<p>They are closer to attention than the average small seller.</p>
<p>That is the real difference.</p>
<p>Most struggling POD stores do not fail because the owner forgot some secret hack. They fail because they are trying to compete in a system where visibility is the bottleneck, and they do not control that bottleneck.</p>
<p>That was true for me too.</p>
<p>My stores did make sales, but I was never happy with the economics. I was constantly aware that the real constraint was not “can I make another design?”</p>
<p>It was:</p>
<ul>
<li>can I get the right people to see it,</li>
<li>can I get them to care,</li>
<li>and can I do that repeatedly at acceptable margins?</li>
</ul>
<p>That is a much harder problem than most “start a POD business” content admits.</p>
<h2>What changed when I moved up the stack</h2>
<p>The biggest lesson from my own experience was not that POD stores never work.</p>
<p>It was that <strong>they were the wrong position for me</strong>.</p>
<p>My strengths are much closer to:</p>
<ul>
<li>structuring messy information,</li>
<li>building systems,</li>
<li>creating useful products,</li>
<li>and thinking about markets as ecosystems rather than just storefronts.</li>
</ul>
<p>Once I acted on that, things became clearer.</p>
<p>Instead of trying to squeeze better results out of thin-margin stores, I started cataloguing suppliers, tools, features, fees, regions, product categories, and integrations.</p>
<p>That turned into a structured directory.</p>
<p>That directory became <strong>PrintOnDemandBusiness.com</strong>.</p>
<p>And unlike my stores, PODB became profitable within months.</p>
<p>That result mattered to me because it isolated the variable.</p>
<p>Same industry. Same person. Same general knowledge base.</p>
<p>Different position in the value chain.</p>
<p>That told me something important:</p>
<blockquote>
<p>In POD, where you stand matters more than how hard you grind.</p>
</blockquote>
<h2>Is POD still worth it?</h2>
<p>Yes, for some people.</p>
<p>But not automatically, and not for the reasons most beginners hope.</p>
<p>I think POD still makes sense if one of the following is true:</p>
<h3>You already have attention</h3>
<p>If you have an audience, a brand, a niche community, a newsletter, a content engine, or some reliable way to put products in front of the right people, POD can be a strong monetization layer.</p>
<h3>You want to learn ecommerce with low inventory risk</h3>
<p>POD can still be a good learning vehicle. You can learn product pages, marketplaces, creative testing, customer behavior, supplier management, and basic operations without buying stock.</p>
<p>That has real value, even if your first store is not a great business.</p>
<h3>You genuinely enjoy the creative side</h3>
<p>Some people simply like making designs, experimenting with products, and selling to small communities. That is valid. Not every project has to be optimized for maximum leverage.</p>
<p>But if you are a founder looking for the best return on limited time, then the question gets sharper.</p>
<p>You have to ask:</p>
<blockquote>
<p>Is my best move really to become one more small seller in a crowded market?</p>
</blockquote>
<p>For me, the answer became no.</p>
<h2>What I would ask if I were starting today</h2>
<p>If I were starting from scratch in POD now, I would ask myself three questions early.</p>
<h3>1. Do I have, or can I realistically build, attention in a niche?</h3>
<p>Not vague hope. Real attention.</p>
<p>That could be SEO, content, short-form video, a community, an email list, paid traffic you can afford to learn with, or some other repeatable distribution channel.</p>
<p>If the answer is no, the path gets much harder.</p>
<h3>2. Am I okay treating this as a learning project rather than a guaranteed business?</h3>
<p>That framing matters.</p>
<p>If you treat POD as tuition for learning ecommerce, it can be very useful.
If you treat it as your main financial plan from day one, it can become frustrating very quickly.</p>
<h3>3. Given my actual strengths, is a POD store the right role for me?</h3>
<p>This is the most important one.</p>
<p>Some people should absolutely build stores.
Others should build tools.
Others should offer services.
Others should build media, directories, or infrastructure around the space.</p>
<p>A lot of wasted time comes from choosing the most visible role, not the one with the best fit.</p>
<h2>Where I landed</h2>
<p>I did not leave POD because I think it is fake.</p>
<p>I moved away from chasing store wins because I was not happy with the margins, the fragility, or the leverage.</p>
<p>What I found on the other side was a better fit.</p>
<p>By moving up the stack, I stopped trying to win one product at a time and started building something useful to the ecosystem itself.</p>
<p>That shift made far more sense for my skills, my goals, and my tolerance for operational grind.</p>
<p>So if you are in POD right now, or thinking about entering it, my advice is simple:</p>
<p>Zoom out.</p>
<p>Do not just ask whether POD works.
Ask where the money flows.
Ask who owns the attention.
Ask what role actually fits you.</p>
<p>Because the answer might not be “start another store.”</p>
<p>For me, it wasn’t.</p>
<p>And recognizing that was one of the more useful business lessons I have learned in a long time.</p>
]]></content:encoded></item><item><title><![CDATA[Composable Architecture Without Client-Heavy Bloat]]></title><description><![CDATA[Composable Architecture Without Client-Heavy Bloat
Why most small teams should start server-first, stay modular, and stop confusing code boundaries with network boundaries.
A lot of teams say they wan]]></description><link>https://blog.samiralibabic.com/composable-architecture-without-client-heavy-bloat</link><guid isPermaLink="true">https://blog.samiralibabic.com/composable-architecture-without-client-heavy-bloat</guid><category><![CDATA[engineering]]></category><category><![CDATA[architecture]]></category><category><![CDATA[Startups]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Fri, 10 Apr 2026 08:00:00 GMT</pubDate><enclosure url="https://samiralibabic.com/images/founders-notes/composable-architecture/cover.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Composable Architecture Without Client-Heavy Bloat</h1>
<p><em>Why most small teams should start server-first, stay modular, and stop confusing code boundaries with network boundaries.</em></p>
<p>A lot of teams say they want a composable architecture.</p>
<p>What they often build instead is distributed coupling.</p>
<p>They split the app into a separate frontend and backend early, push everything through HTTP, duplicate validation and state handling on both sides, and call it flexibility. In reality, they did not remove coupling. They just moved it into JSON payloads, API contracts, and deployment coordination.</p>
<p>That is not composability. It is overhead.</p>
<p>For most products, especially those built by small teams, a better default is much simpler:</p>
<ul>
<li>keep the app server-first</li>
<li>keep the core logic modular</li>
<li>keep most communication in-process</li>
<li>extract separate services only when there is real operational pressure</li>
</ul>
<p>This gives you a system that is easier to ship, easier to reason about, and still flexible enough to grow a new UI or new interfaces later.</p>
<h2>The mistake: confusing code boundaries with deployment boundaries</h2>
<p>This is the root confusion.</p>
<p>People see a clean architecture diagram and assume the boxes must be separate processes talking over HTTP.</p>
<p>They do not.</p>
<p>A boundary in code does <strong>not</strong> automatically mean a boundary over the network.</p>
<p>Those are two different questions:</p>
<ol>
<li>Where should responsibilities be separated in the codebase?</li>
<li>Which parts actually need to be deployed and operated independently?</li>
</ol>
<p>Most teams answer question two far too early.</p>
<p>You can have a well-structured, modular app where the parts are cleanly separated in code but still run inside one deployable system. In fact, that is usually the better starting point.</p>
<h2>What server-first actually means</h2>
<p>Server-first does not mean shoving business logic into templates and calling it a day.</p>
<p>It does not mean building a giant tangled monolith where controllers, views, and database models all leak into each other.</p>
<p>It means:</p>
<ul>
<li>the server is allowed to assemble the page</li>
<li>the browser is not forced to become a second application runtime</li>
<li>the core business logic is not tied to any one delivery surface</li>
</ul>
<p>That last point matters most.</p>
<p>The goal is not “HTML everywhere forever.”</p>
<p>The goal is to keep the <strong>core</strong> of the app stable, while making it possible to attach different adapters around it.</p>
<p>Those adapters might be:</p>
<ul>
<li>server-rendered HTML</li>
<li>a JSON API</li>
<li>a background job</li>
<li>an admin interface</li>
<li>a CLI command</li>
<li>a future mobile client</li>
</ul>
<p>If those are all thin layers over the same application core, you have optionality without paying the full cost upfront.</p>
<h2>What “composable” should mean in practice</h2>
<p>When developers say “composable,” they often mean one of two things.</p>
<p>The first is infrastructure composability:</p>
<ul>
<li>swap the database</li>
<li>change the ORM</li>
<li>replace the payment provider</li>
<li>move search to a different backend</li>
</ul>
<p>The second is product composability:</p>
<ul>
<li>redesign the UI</li>
<li>add a different frontend later</li>
<li>expose an API</li>
<li>support a second interface without rewriting the app</li>
</ul>
<p>Both are valid goals.</p>
<p>But neither one requires you to split into “frontend” and “backend” as separate systems from the beginning.</p>
<p>What you actually need is a stable application core and clean seams around it.</p>
<h2>The better default: a modular monolith</h2>
<p>For most business apps, the right default is not a fat client and not a microservice fleet.</p>
<p>It is a modular monolith.</p>
<p>One deployable app. Clean internal modules. Clear boundaries. Thin adapters.</p>
<p>That means the system is structured around things like:</p>
<ul>
<li>domain and application logic</li>
<li>ports or interfaces for things that vary</li>
<li>concrete adapters for the web, database, payments, jobs, and so on</li>
</ul>
<p>Not around a premature split into two independently deployed apps.</p>
<p>A simple shape looks like this:</p>
<pre><code class="language-text">/domain
  listing.ts
  billing.ts
  permissions.ts

/application
  createListing.ts
  approveListing.ts
  chargePlan.ts

/ports
  ListingRepo.ts
  BillingGateway.ts
  SearchIndex.ts

/adapters
  /db
    ListingRepoPostgres.ts
  /web-html
    listingsPage.ts
  /web-api
    listingsApi.ts
  /jobs
    reindexListings.ts
</code></pre>
<p>The important part is not the folder names. The important part is the direction of dependency.</p>
<p>The application core should not care whether the request came from an HTML page, an API route, or a worker.</p>
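<p>As a sketch of that dependency direction in TypeScript (the names mirror the illustrative layout above; the id generation is a placeholder, not a recommendation):</p>
<pre><code class="language-typescript">// ports/ListingRepo.ts -- the port: an interface the core defines and owns.
interface Listing {
  id: string;
  title: string;
  status: "pending" | "approved";
}

interface ListingRepo {
  save(listing: Listing): Promise&lt;void&gt;;
}

// application/createListing.ts -- the application core depends only on the
// port, never on Express, Postgres, or any other concrete adapter.
class CreateListing {
  constructor(private repo: ListingRepo) {}

  async execute(title: string): Promise&lt;Listing&gt; {
    const listing: Listing = {
      id: Math.random().toString(36).slice(2), // illustrative id only
      title,
      status: "pending",
    };
    await this.repo.save(listing);
    return listing;
  }
}

// adapters/db/ListingRepoPostgres.ts would implement ListingRepo; so would
// an in-memory repo for tests. The core never changes either way.
</code></pre>
<p>The arrow points inward: adapters implement interfaces the core declares, so swapping the database or the web layer never touches <code>CreateListing</code>.</p>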
<h2>A concrete example</h2>
<p>Imagine you are building a listings product.</p>
<p>You have:</p>
<ul>
<li>a public listings page</li>
<li>an internal admin view</li>
<li>a flow for creating a listing</li>
<li>billing rules that determine whether a user can post more listings</li>
<li>search filters and moderation states</li>
</ul>
<p>A lot of teams would jump straight to:</p>
<ul>
<li>React frontend</li>
<li>JSON API backend</li>
<li>maybe separate admin frontend too</li>
</ul>
<p>But if you are a small team, that usually creates more problems than it solves.</p>
<p>A cleaner shape is this:</p>
<ul>
<li><code>CreateListing</code> application service</li>
<li><code>ListingRepo</code> interface</li>
<li><code>BillingAccess</code> policy/service</li>
<li><code>HtmlListingsController</code></li>
<li>maybe later <code>ApiListingsController</code></li>
</ul>
<p>Then the flows look like this:</p>
<pre><code class="language-text">HtmlListingsController -&gt; CreateListing -&gt; ListingRepo -&gt; DB
ApiListingsController  -&gt; CreateListing -&gt; ListingRepo -&gt; DB
AdminJob/Worker        -&gt; CreateListing -&gt; ListingRepo -&gt; DB
</code></pre>
<p>All three use the same business capability.</p>
<p>That is real composability.</p>
<p>You can redesign the UI later. You can add an API later. You can migrate route by route. You can even remove the HTML controller for one area if a richer frontend becomes worth it.</p>
<p>The core stays intact.</p>
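<p>To make "thin" concrete, here is a TypeScript sketch. Both handler names are hypothetical, and the <code>CreateListing</code> type stands in for the application service; the point is that neither adapter contains a business rule:</p>
<pre><code class="language-typescript">// A stand-in for the application service from the core (illustrative).
type Listing = { id: string; title: string; status: string };
type CreateListing = { execute(title: string): Promise&lt;Listing&gt; };

// adapters/web-html: parse input, call the core, render an HTML fragment.
async function handleHtmlCreate(
  form: { title: string },
  svc: CreateListing
): Promise&lt;string&gt; {
  const listing = await svc.execute(form.title);
  return `&lt;li&gt;${listing.title} (${listing.status})&lt;/li&gt;`;
}

// adapters/web-api: same capability, JSON surface.
async function handleApiCreate(
  body: { title: string },
  svc: CreateListing
): Promise&lt;string&gt; {
  return JSON.stringify(await svc.execute(body.title));
}
</code></pre>
<p>Each adapter is a few lines of translation. Deleting one, or adding a third for a worker, costs almost nothing because the decision-making lives in the service.</p>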
<h2>No, modules inside one app should not talk over HTTP</h2>
<p>This is where many people get lost.</p>
<p>If you have these parts inside the same application:</p>
<ul>
<li>HTML controller</li>
<li>API controller</li>
<li>application service</li>
<li>repository</li>
</ul>
<p>then they usually should <strong>not</strong> communicate via HTTP.</p>
<p>They should call each other directly in-process.</p>
<p>Like this:</p>
<pre><code class="language-text">HtmlController -&gt; AppService -&gt; Repo -&gt; DB
</code></pre>
<p>not like this:</p>
<pre><code class="language-text">HtmlController -(HTTP)-&gt; AppService -(HTTP)-&gt; Repo
</code></pre>
<p>HTTP is for process boundaries.</p>
<p>Inside one app, use direct calls.</p>
<p>This seems obvious once stated plainly, but a lot of teams blur the line between a conceptual boundary and a network boundary.</p>
<p>That confusion creates fake microservices inside one product.</p>
<p>The result is predictable:</p>
<ul>
<li>more latency</li>
<li>more failure modes</li>
<li>more serialization and contract maintenance</li>
<li>harder debugging</li>
<li>weaker transactions</li>
<li>slower shipping</li>
</ul>
<p>You bought distributed systems problems without earning the benefits.</p>
<h2>When should a module stay in-process?</h2>
<p>Most core business modules should stay in the modulith, the modular monolith, longer than people think.</p>
<p>Keep a module in-process when most of these are true:</p>
<ul>
<li>it participates in the same user request or transaction</li>
<li>it shares the core data model deeply</li>
<li>it does not need independent scaling</li>
<li>it is owned by the same team</li>
<li>there is no real second consumer yet</li>
<li>a network hop would mostly add friction</li>
</ul>
<p>This usually applies to things like:</p>
<ul>
<li>users</li>
<li>permissions</li>
<li>billing logic</li>
<li>listings or content entities</li>
<li>moderation</li>
<li>account state</li>
<li>admin workflows</li>
</ul>
<p>These are not great early service boundaries. They are usually core modules of the same application.</p>
<h2>What is a good candidate for extraction?</h2>
<p>Separate services make more sense when something is operationally distinct.</p>
<p>That usually means it is:</p>
<ul>
<li>async</li>
<li>bursty</li>
<li>resource-heavy</li>
<li>failure-prone</li>
<li>dependent on third-party systems</li>
<li>running on a different cadence from the main user-facing app</li>
</ul>
<p>Examples:</p>
<ul>
<li>search indexing</li>
<li>media or PDF processing</li>
<li>webhook consumers</li>
<li>crawler/scraper pipelines</li>
<li>AI enrichment jobs</li>
<li>analytics ingestion</li>
<li>async notifications</li>
</ul>
<p>These are much better candidates for workers or separate services.</p>
<p>The pattern here is simple:</p>
<p>If a module mostly answers <strong>what the business does</strong>, keep it in the modulith by default.</p>
<p>If it mostly answers <strong>how the system processes heavy or operationally distinct work</strong>, it is a better extraction candidate.</p>
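<p>One way to keep that future extraction cheap is to put a small queue port between the core and the heavy work. A sketch, with <code>EventQueue</code> and <code>approveListing</code> as illustrative names rather than an existing API:</p>
<pre><code class="language-typescript">type ListingChanged = { listingId: string };

// The core publishes an event; it does not know or care whether the
// consumer runs in-process or as a separate worker.
interface EventQueue {
  publish(event: ListingChanged): Promise&lt;void&gt;;
}

// Today: an in-process adapter that just invokes the handler directly...
class InProcessQueue implements EventQueue {
  constructor(private handler: (e: ListingChanged) =&gt; Promise&lt;void&gt;) {}
  publish(e: ListingChanged): Promise&lt;void&gt; {
    return this.handler(e);
  }
}

// ...which can later be swapped for SQS, Redis, or similar without
// touching this function.
async function approveListing(
  listingId: string,
  queue: EventQueue
): Promise&lt;void&gt; {
  // The business decision happens here, synchronously, in the monolith.
  await queue.publish({ listingId }); // heavy reindexing happens elsewhere
}
</code></pre>
<p>The boundary exists in code from day one; turning it into a process boundary later is a deployment change, not a rewrite.</p>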
<h2>You can still replace the frontend later</h2>
<p>This is the point people are usually worried about.</p>
<p>They want to avoid getting trapped in one presentation layer.</p>
<p>Fair concern. But the solution is not to split everything on day one.</p>
<p>The solution is to keep the presentation layer thin.</p>
<p>If your server-rendered HTML controller is only an adapter that calls application services, you can later:</p>
<ul>
<li>replace those routes with an API-driven frontend</li>
<li>run old HTML and new UI side by side during migration</li>
<li>keep some parts server-rendered and make only a complex area more interactive</li>
</ul>
<p>That is often the best real-world setup.</p>
<p>For example:</p>
<ul>
<li>marketing pages: server-rendered</li>
<li>account and settings: server-rendered</li>
<li>complex dashboard or editor: richer client-side frontend</li>
</ul>
<p>Not every page needs the same architecture.</p>
<p>That is another reason to resist defaulting to an all-client app too early.</p>
<h2>Why teams over-split</h2>
<p>Teams usually do not split early because they are stupid. They split early because the story sounds reasonable.</p>
<p>“We want flexibility.”</p>
<p>“We might need mobile later.”</p>
<p>“We do not want a monolith.”</p>
<p>“We want the frontend to be replaceable.”</p>
<p>All of that sounds smart. The problem is that these are often hypothetical benefits traded for immediate, certain costs.</p>
<p>Those costs are real:</p>
<ul>
<li>duplicated validation and types</li>
<li>more auth and session complexity</li>
<li>more code to coordinate</li>
<li>slower local development</li>
<li>harder end-to-end reasoning</li>
<li>more release friction</li>
<li>more bugs at boundaries</li>
</ul>
<p>You should not pay those costs based on imagined future consumers.</p>
<p>Build the second consumer when it becomes real.</p>
<p>Until then, keep the architecture honest.</p>
<h2>A good migration path</h2>
<p>The right progression for most products looks like this:</p>
<h3>1. Start with a modular monolith</h3>
<p>One app. Clear boundaries. Server-first by default. Thin controllers.</p>
<h3>2. Add API adapters where there is a real need</h3>
<p>Maybe a richer frontend is justified for one area. Fine. Add the API adapter there.</p>
<h3>3. Extract operational subsystems first</h3>
<p>Workers, indexing, AI jobs, media processing, webhook handling.</p>
<h3>4. Extract business-domain services only when real pressure proves it</h3>
<p>That pressure might be:</p>
<ul>
<li>independent ownership</li>
<li>separate scaling needs</li>
<li>narrow stable contracts</li>
<li>truly different deployment cadence</li>
</ul>
<p>Until then, keep the domain close together.</p>
<h2>The rule I would use</h2>
<p>If you are unsure, default to this:</p>
<p><strong>Keep business logic and core workflows in one modular, server-first app. Extract only the parts that are operationally distinct, async, or heavy.</strong></p>
<p>That rule will save most small teams from a lot of self-inflicted architecture pain.</p>
<h2>Closing thought</h2>
<p>A composable architecture is not one where everything talks over HTTP.</p>
<p>It is one where the core of the app remains stable while the surfaces and infrastructure around it can change.</p>
<p>That means:</p>
<ul>
<li>clean code boundaries</li>
<li>thin adapters</li>
<li>server-first by default</li>
<li>minimal client-side complexity unless it clearly pays for itself</li>
<li>service boundaries introduced for real reasons, not architectural fashion</li>
</ul>
<p>That is the boring answer.</p>
<p>It is also the one that usually works.</p>
]]></content:encoded></item><item><title><![CDATA[What I Learned Building Print2Social: Social Automation Is Not Enough for Store Traffic]]></title><description><![CDATA[What I Learned Building Print2Social: Social Automation Is Not Enough for Store Traffic
First published July 10, 2025. Updated December 2025 with the full story and current status of Print2Social.
Over the last year I have been building Print2Social,...]]></description><link>https://blog.samiralibabic.com/what-i-learned-building-print2social</link><guid isPermaLink="true">https://blog.samiralibabic.com/what-i-learned-building-print2social</guid><category><![CDATA[marketing]]></category><category><![CDATA[SaaS]]></category><category><![CDATA[Startups]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Thu, 09 Apr 2026 15:48:28 GMT</pubDate><enclosure url="https://samiralibabic.com/images/founders-notes/what-i-learned-building-print2social/cover.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-i-learned-building-print2social-social-automation-is-not-enough-for-store-traffic">What I Learned Building Print2Social: Social Automation Is Not Enough for Store Traffic</h1>
<p><em>First published July 10, 2025. Updated December 2025 with the full story and current status of Print2Social.</em></p>
<p>Over the last year I have been building <strong>Print2Social</strong>, a tool designed to automate social media content for print-on-demand (POD) and indie e-commerce store owners.</p>
<p>The original goal was simple:</p>
<ul>
<li>connect your Printful account and your social profiles</li>
<li>auto-generate product content</li>
<li>keep your feeds active without burning your time</li>
</ul>
<p>I wanted to help shop owners like myself spend less time on content creation and more time building the business, with the hope that automated posts on Instagram, Facebook, TikTok and later Pinterest would grow our brands and bring real traffic to our stores.</p>
<p>After months of real world testing, a small public launch, and a few more experiments, here is what I learned and where Print2Social stands today.</p>
<hr />
<h2 id="heading-the-reality-social-automation-brings-views-not-store-traffic">The reality: social automation brings views, not store traffic</h2>
<p>In the first version of Print2Social I connected several of my own stores to Facebook, Instagram and TikTok and let it auto-post hundreds of pieces of content.</p>
<p>The tool did what it promised. It created:</p>
<ul>
<li>lifestyle images and product highlights for feeds</li>
<li>short-form product clips for TikTok</li>
<li>regular posts that kept every account “active”</li>
</ul>
<p>Some TikTok videos reached more than 700 views in the early tests. Later, as I expanded to four stores, one typical 7-day window looked like this across their TikTok accounts:</p>
<ul>
<li>roughly 18,000 video views in total</li>
<li>more than 120 profile visits</li>
<li>a handful of likes</li>
<li>9 new followers</li>
</ul>
<p>On Meta the picture was even clearer. Across four Facebook Pages and four Instagram accounts, a typical week with more than 100 automated posts produced:</p>
<ul>
<li>reach in the low double digits on Facebook</li>
<li>under one thousand people reached on Instagram</li>
<li>0 new followers</li>
<li>0 new contacts or messages</li>
</ul>
<p>Pinterest was the most honest of all. For a while one account showed 26 impressions in a month. The others stayed at zero.</p>
<p>The conclusion is simple:</p>
<blockquote>
<p>Social automation can keep your feeds busy, but it does not magically create distribution, followers or sales.</p>
</blockquote>
<p>Social media platforms are designed to keep users inside the app. They reward content that drives watch time, replies, saves and shares. Static product content from tiny accounts rarely does that, whether it is posted manually or by a bot.</p>
<p>Even if you gain some views, very few people will click out to your website, and even fewer will buy.</p>
<p>For indie store owners, automated posting alone does not drive targeted, purchase ready traffic.</p>
<hr />
<h2 id="heading-the-first-reaction-maybe-the-posts-just-need-to-be-shoppable">The first reaction: maybe the posts just need to be shoppable</h2>
<p>When I wrote the first version of this article in July 2025, my thinking went like this:</p>
<ul>
<li>If people do not like to leave the app, bring the shop to them.</li>
<li>Meta Shops, TikTok Shop and Pinterest Shopping are built for in-app purchases.</li>
<li>Instead of sending people to external product pages, turn every post into a shoppable surface.</li>
</ul>
<p>The plan at the time was to pivot Print2Social from “auto posting” to a <strong>shoppable social commerce tool</strong>.</p>
<p>On paper that meant:</p>
<ul>
<li>syncing product catalogs from Shopify, WooCommerce, Printful, Printify</li>
<li>auto generating media with correct product tags</li>
<li>publishing to Meta Shops, TikTok Shop and Pinterest Shopping</li>
<li>so that a customer could buy without ever leaving the app</li>
</ul>
<p>This direction still makes sense in theory. It is a real problem for many brands.</p>
<p>In practice I did not fully execute that pivot.</p>
<p>Before committing to a big new roadmap, I decided to run one more experiment.</p>
<hr />
<h2 id="heading-the-14-day-experiment-mini-launch-and-more-data">The 14-day experiment: mini launch and more data</h2>
<p>I set up a very simple two-week test for Print2Social.</p>
<p><strong>On the product side:</strong></p>
<ul>
<li>keep four POD stores posting to Facebook, Instagram, TikTok and Pinterest using Print2Social</li>
<li>only fix critical bugs, no new features, no fancy templates</li>
</ul>
<p><strong>On the marketing side:</strong></p>
<ul>
<li>put a banner for Print2Social on my main site, PrintOnDemandBusiness.com</li>
<li>send a short email to my list</li>
<li>cross post about the tool on my social channels</li>
<li>list it on Product Hunt and resubmit to BetaList</li>
</ul>
<p>Then I stepped back and looked at the numbers from Google Analytics, Stripe and the social platforms.</p>
<h3 id="heading-what-happened-on-the-site-and-in-stripe">What happened on the site and in Stripe</h3>
<p>Over the 14-day window and surrounding weeks:</p>
<ul>
<li>Google Analytics showed only a few dozen unique visitors to Print2Social</li>
<li>there was no noticeable spike around the launch date</li>
<li>sources were a mix of direct, Google, a couple of referrals and a bit from my own sites</li>
</ul>
<p>Stripe told a similar story:</p>
<ul>
<li>monthly recurring revenue hovered around a single digit euro amount</li>
<li>net volume was low double digits</li>
<li>the only paying users were a couple of early adopters from before this experiment</li>
<li>no new customers appeared from the launch channels</li>
</ul>
<p>As a standalone SaaS, Print2Social did not find traction.</p>
<h3 id="heading-what-happened-on-the-stores">What happened on the stores</h3>
<p>On the social side I have already shared the high level numbers.</p>
<p>The short version is:</p>
<ul>
<li>hundreds of automated posts per week</li>
<li>a steady trickle of low-intent TikTok views</li>
<li>almost no new followers or meaningful engagement</li>
<li>no visible lift in store traffic or sales that I could attribute to the tool</li>
</ul>
<p>At that point I had enough information to make a decision.</p>
<hr />
<h2 id="heading-print2social-today-a-useful-internal-tool-not-a-product">Print2Social today: a useful internal tool, not a product</h2>
<p>Print2Social does what it was designed to do.</p>
<ul>
<li>It connects to Printful and my social accounts.</li>
<li>It turns products into posts.</li>
<li>It saves me time and removes boring manual work.</li>
</ul>
<p>For that reason I am not shutting it down.</p>
<p>Instead I am changing how I think about it.</p>
<ul>
<li>Print2Social now lives on as <strong>an internal utility</strong> for my own POD test stores.</li>
<li>I am not actively developing new features for it.</li>
<li>I am not marketing it, pitching it, or trying to grow it as a SaaS.</li>
<li>If someone finds it and really wants access, I can still let them in, but it is no longer a focus.</li>
</ul>
<p>My POD stores themselves are treated as testbeds only. At some point in the future I will likely consolidate everything to a simple Etsy presence or redirect the domains to PrintOnDemandBusiness.com.</p>
<p>The experiment did its job. It answered the question.</p>
<hr />
<h2 id="heading-what-i-would-tell-other-founders">What I would tell other founders</h2>
<p>I am not writing this to say “never build social tools” or “never start a consumer brand”. Those things can work under the right conditions.</p>
<p>This is what I would highlight after going through this:</p>
<h3 id="heading-1-do-not-mistake-posts-for-distribution">1. Do not mistake posts for distribution</h3>
<p>It is easy to think that the problem is speed.</p>
<p>“If I could post more often, the algorithm would eventually notice me.”</p>
<p>What I saw in practice is closer to the opposite:</p>
<ul>
<li>more product posts from tiny accounts mostly produced more quiet</li>
<li>even thousands of TikTok views did not turn into traffic</li>
</ul>
<p>If the base content format does not give the platform a reason to push you, automating that format just makes you invisible slightly faster.</p>
<h3 id="heading-2-automation-is-best-used-on-top-of-working-levers">2. Automation is best used on top of working levers</h3>
<p>Print2Social optimised something that was never the main bottleneck.</p>
<p>Saving time is real value, but it is not the same as creating demand.</p>
<p>In hindsight, the more interesting automation problems sit closer to revenue:</p>
<ul>
<li>helping people who already have traffic earn more per visitor</li>
<li>helping people who already close deals close them faster</li>
</ul>
<p>The next time I build automation for others, I want it to sit under levers that are already proven, not under hope.</p>
<h3 id="heading-3-demand-should-be-visible-before-you-invest-heavily">3. Demand should be visible before you invest heavily</h3>
<p>Print2Social started as an internal tool for my own annoyance.</p>
<p>That part was fine. It is still useful for me.</p>
<p>Where I overreached was treating that initial usefulness as proof that there was a commercial product here.</p>
<p>Before turning future tools into products I want to see:</p>
<ul>
<li>clear search demand</li>
<li>or buyers already paying for a worse alternative</li>
<li>or a channel where similar products clearly get signups and usage</li>
</ul>
<p>Internal annoyances are a great source of ideas. They are not enough on their own.</p>
<h3 id="heading-4-run-finite-experiments-and-believe-the-results">4. Run finite experiments and believe the results</h3>
<p>The hardest part is not starting experiments. It is stopping them.</p>
<p>My 14 day launch window for Print2Social was intentionally small. It let me test a few channels, watch the numbers and then make a call.</p>
<p>When the data came back weak, the right move was not “one more feature” or “one more campaign”. It was to accept that the idea, in this form, does not have the pull I hoped for.</p>
<p>That is not a dramatic failure. It is just a closed loop.</p>
<hr />
<h2 id="heading-closing">Closing</h2>
<p>I went into Print2Social hoping that social automation would be an engine for store traffic. What I actually built is a solid little tool that takes boring work off my plate but does not change the fundamentals of distribution.</p>
<p>That is still a win for me personally, and it might become a useful component inside something bigger one day.</p>
<p>For now, the lesson is simple.</p>
<p>Build tools that save you time. Just do not confuse “I can post more” with “more people will care”.</p>
<p>And when the data says a project belongs in the internal toolbox, let it live there and move your focus to where demand is already knocking.</p>
]]></content:encoded></item><item><title><![CDATA[How to Own the Future: Low Capital Entry Points to Long Term Value Creation]]></title><description><![CDATA[The World Is Changing. Are You Ready?
We are living through a shift as significant as the Industrial Revolution. Automation, AI and platforms are steadily reducing the need for human labor in almost every sector. Factories, logistics, customer suppor...]]></description><link>https://blog.samiralibabic.com/how-to-own-the-future-low-capital-entry-points-to-long-term-value-creation</link><guid isPermaLink="true">https://blog.samiralibabic.com/how-to-own-the-future-low-capital-entry-points-to-long-term-value-creation</guid><category><![CDATA[Future]]></category><category><![CDATA[Economy]]></category><category><![CDATA[assets]]></category><category><![CDATA[digital assets]]></category><category><![CDATA[ownership]]></category><category><![CDATA[startup]]></category><category><![CDATA[Founder]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Fri, 05 Dec 2025 08:00:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/JKUTrJ4vK00/upload/4b600f253be2dbafd5c63ee1f58525fa.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-the-world-is-changing-are-you-ready">The World Is Changing. Are You Ready?</h2>
<p>We are living through a shift as significant as the Industrial Revolution. Automation, AI and platforms are steadily reducing the need for human labor in almost every sector. Factories, logistics, customer support, copywriting, even coding itself can now be done by machines.</p>
<p>If machines do more and more of the work, a critical question appears for anyone who wants to stay relevant and prosperous:</p>
<p><strong>How can you own a piece of tomorrow's value creation, even if you have limited capital today?</strong></p>
<p>Most advice still focuses on labor: get a better job, learn a new skill, hustle harder. But as value creation shifts away from human labor toward <strong>ownership of productive assets</strong> (digital, data or distribution), it becomes more important to position yourself as an <strong>owner</strong>, not only as a worker.</p>
<p>This article lays out <strong>five practical, low capital entry points</strong> that help you own assets which matter in a high automation future. You do not need a big team or investors. You need a clear niche, a bit of patience and the mindset of an owner.</p>
<hr />
<h2 id="heading-1-build-a-high-trust-micro-directory">1. Build a High Trust Micro Directory</h2>
<p><strong>Why it matters</strong></p>
<p>Owning structured and trusted data is a form of digital land ownership. In almost every niche, discovery and curation are still unsolved problems. If you control a useful directory for tools, suppliers, consultants or products, you become a gatekeeper and a distribution hub.</p>
<p>You are not just listing links. You are deciding what gets visibility and how people discover options. That is real power.</p>
<p><strong>How to start</strong></p>
<ul>
<li><p>Pick a niche that is fragmented or confusing. Examples: AI tools for lawyers, eco friendly packaging suppliers, telehealth platforms, local renewable installers.</p>
</li>
<li><p>Use open data, scraping or simple manual research to seed the directory.</p>
</li>
<li><p>Enrich each listing with extra metadata. For example: who it is for, typical price range, strengths and weaknesses, main use cases.</p>
</li>
<li><p>Add simple filters, tags and search. Focus on clarity and trust, not fancy design.</p>
</li>
</ul>
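<p>The data layer behind a micro directory can be very small. Here is a minimal sketch, where the listing fields and filter logic are illustrative assumptions rather than a prescribed schema:</p>

```python
# Minimal sketch of a directory's data layer: listings enriched with
# metadata, plus simple tag and price filtering. Field names are
# illustrative, not a fixed schema.

listings = [
    {"name": "ToolA", "tags": ["lawyers", "contracts"], "price_tier": "mid",
     "best_for": "solo practitioners"},
    {"name": "ToolB", "tags": ["lawyers", "research"], "price_tier": "high",
     "best_for": "mid-size firms"},
    {"name": "ToolC", "tags": ["packaging"], "price_tier": "low",
     "best_for": "small e-commerce shops"},
]

def filter_listings(items, tag=None, price_tier=None):
    """Return listings matching all provided filters."""
    results = []
    for item in items:
        if tag is not None and tag not in item["tags"]:
            continue
        if price_tier is not None and item["price_tier"] != price_tier:
            continue
        results.append(item)
    return results

legal_tools = filter_listings(listings, tag="lawyers")
print([item["name"] for item in legal_tools])  # ['ToolA', 'ToolB']
```

<p>Everything else, the design, the search box, the tags UI, sits on top of a structure this simple. The asset is the curated data, not the code.</p>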
<p><strong>How it makes money</strong></p>
<ul>
<li><p>Sponsored placements and featured listings.</p>
</li>
<li><p>Paid “highlight” spots for new or premium entries.</p>
</li>
<li><p>Affiliate links to tools and platforms.</p>
</li>
<li><p>Access to the raw dataset as a paid download or API.</p>
</li>
</ul>
<p><strong>Cost</strong></p>
<p>Time plus basic hosting. Think in the range of 10 to 20 euros per month. The main investment is your attention and judgment.</p>
<hr />
<h2 id="heading-2-run-a-niche-newsletter-with-a-companion-database">2. Run a Niche Newsletter With a Companion Database</h2>
<p><strong>Why it matters</strong></p>
<p>Email lists are one of the most defensible digital assets. A high quality newsletter gives you both attention and distribution. It is a direct line to people who already trust you.</p>
<p>If you pair the newsletter with a public or semi public database, you end up owning both:</p>
<ul>
<li><p>The narrative about what matters in the niche.</p>
</li>
<li><p>The structured knowledge base people rely on.</p>
</li>
</ul>
<p><strong>How to start</strong></p>
<ul>
<li><p>Find a niche where people feel overwhelmed by information. For example: “Which AI tools actually save time for solo founders” or “What really works in print on demand right now”.</p>
</li>
<li><p>Curate and interpret news, tool launches and real world experiments. Use AI to draft summaries, but always add your own judgment.</p>
</li>
<li><p>Publish consistently. Weekly or biweekly is enough if the signal is high.</p>
</li>
<li><p>Build a simple companion database in Notion or on your site that collects the most important links, tools and resources from the newsletter.</p>
</li>
</ul>
<p><strong>How it makes money</strong></p>
<ul>
<li><p>Sponsorships and promoted sections in the newsletter.</p>
</li>
<li><p>Paid access to the full database or extra filters.</p>
</li>
<li><p>Community memberships or mastermind groups that sit on top of the newsletter.</p>
</li>
</ul>
<p><strong>Cost</strong></p>
<p>Most email tools are free or cheap for small lists. The main cost is your time and your willingness to be the filter for others.</p>
<hr />
<h2 id="heading-3-extract-and-sell-unique-datasets">3. Extract And Sell Unique Datasets</h2>
<p><strong>Why it matters</strong></p>
<p>In an AI driven economy, clean and structured data is often more valuable than code. Businesses, consultants and model builders all need high quality datasets for training, analysis and outreach.</p>
<p>If you can turn messy public data into a structured dataset that others can use, you own an asset that can be sold again and again.</p>
<p><strong>How to start</strong></p>
<ul>
<li><p>Pick a space where information is public but not organized. Examples: Shopify apps, Etsy shops in a niche, local specialists, conference speakers, podcast guests, SaaS tools for a specific industry.</p>
</li>
<li><p>Scrape or collect the raw data from websites, directories and public sources.</p>
</li>
<li><p>Clean the data and normalize fields like names, URLs, categories, location and pricing.</p>
</li>
<li><p>Enrich the dataset with extra fields using AI or manual research. For example: tags, estimated revenue tier, tech stack, target segment.</p>
</li>
</ul>
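<p>The clean-and-normalize step is where most of the value is created. A minimal sketch of what that looks like in practice, with field names and normalization rules as illustrative assumptions:</p>

```python
# Sketch of the clean-and-normalize step: raw scraped rows with
# inconsistent fields become a uniform schema ready for export or sale.

def normalize_row(raw):
    """Normalize one scraped record into a consistent schema."""
    url = raw.get("url", "").strip().lower()
    if url and not url.startswith(("http://", "https://")):
        url = "https://" + url
    return {
        "name": raw.get("name", "").strip(),
        "url": url,
        "category": raw.get("category", "uncategorized").strip().lower(),
        "location": raw.get("location", "").strip() or None,
    }

raw_rows = [
    {"name": "  Acme Apps ", "url": "ACMEAPPS.COM", "category": "Shopify Apps"},
    {"name": "Nordic Print", "url": "https://nordicprint.example", "location": " Oslo "},
]

dataset = [normalize_row(r) for r in raw_rows]
print(dataset[0]["url"])  # https://acmeapps.com
```

<p>Buyers pay for exactly this consistency: every row has the same fields, the same casing, the same URL format. The scraping itself is usually the easy part.</p>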
<p><strong>How it makes money</strong></p>
<ul>
<li><p>One off CSV sales on your own site or marketplaces.</p>
</li>
<li><p>Annual or quarterly update subscriptions.</p>
</li>
<li><p>Licensing deals with agencies or startups.</p>
</li>
<li><p>Bundling the data into a micro SaaS or report.</p>
</li>
</ul>
<p><strong>Cost</strong></p>
<p>Mainly your time and some basic scraping infrastructure. You do not need heavy servers. Start small and test demand before scaling.</p>
<hr />
<h2 id="heading-4-turn-a-workflow-into-a-micro-api-or-tiny-tool">4. Turn a Workflow Into a Micro API Or Tiny Tool</h2>
<p><strong>Why it matters</strong></p>
<p>There are endless small workflows people repeat every day. Renaming files, generating product descriptions, adding UTM parameters, resizing images, cleaning audio, checking URLs.</p>
<p>If you can capture one of these workflows and automate it behind a simple interface, you can charge for access instead of selling your time.</p>
<p><strong>How to start</strong></p>
<ul>
<li><p>Watch what you or your peers do over and over inside a browser or editor.</p>
</li>
<li><p>Ask: “Can this be turned into a single button or API call?”</p>
</li>
<li><p>Build a tiny web app, browser extension or API that does exactly that one thing.</p>
</li>
<li><p>Add clear documentation and a simple pricing model, even if it is just pay what you want or a few euros per month.</p>
</li>
</ul>
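<p>To make this concrete, take one of the workflows mentioned above, adding UTM parameters, and reduce it to a single function that could sit behind a tiny web form or API endpoint. This is a standard-library sketch, not a finished product:</p>

```python
# A micro tool in one function: append UTM tracking parameters to a URL
# while preserving any existing query parameters.

from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def add_utm(url, source, medium, campaign):
    """Append UTM parameters to a URL, keeping existing query params."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_utm("https://example.com/pricing?ref=nav", "newsletter", "email", "dec"))
# https://example.com/pricing?ref=nav&utm_source=newsletter&utm_medium=email&utm_campaign=dec
```

<p>That is the whole product. The value is not in the code, it is in packaging one annoyance behind one button and charging for the convenience.</p>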
<p><strong>How it makes money</strong></p>
<ul>
<li><p>Subscription fees for unlimited use.</p>
</li>
<li><p>Pay per use credits via an API.</p>
</li>
<li><p>White label deals with agencies who want to offer the feature under their own brand.</p>
</li>
</ul>
<p><strong>Cost</strong></p>
<p>Your development time plus very basic hosting. Many of these tools can run on extremely cheap infrastructure because they only do one thing.</p>
<hr />
<h2 id="heading-5-own-the-prompt-and-output-layer-for-a-gpt-vertical">5. Own The Prompt And Output Layer For A GPT Vertical</h2>
<p><strong>Why it matters</strong></p>
<p>As large language models become a commodity, the value does not sit in the raw model alone. It sits in the <strong>workflow</strong> that turns a vague request into a reliable result.</p>
<p>If you define the best practice prompts and output templates for a specific vertical, you become the default interface between people and AI in that area.</p>
<p><strong>How to start</strong></p>
<ul>
<li><p>Pick a high value workflow where people already try to use AI, but often get poor results. For example: legal letters, onboarding email sequences, product requirement documents, podcast show notes, outreach emails.</p>
</li>
<li><p>Experiment heavily. Find prompt structures and checklists that consistently give good results.</p>
</li>
<li><p>Turn these into reusable templates and step by step flows.</p>
</li>
<li><p>Package the result as prompt libraries, downloadable guides, a small web app or a paid Notion system.</p>
</li>
</ul>
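<p>The packaged version of such a flow can be as simple as a template plus a fill function. A sketch for the podcast show notes example, where the template wording and field names are illustrative assumptions:</p>

```python
# Sketch of "owning the prompt layer": a reusable template that turns
# structured inputs into a consistent prompt. The hosted product would
# hide this behind a form and send the result to an LLM API.

SHOW_NOTES_TEMPLATE = """You are an assistant that writes podcast show notes.
Episode title: {title}
Guest: {guest}
Key topics: {topics}

Write:
1. A two-sentence summary.
2. Five highlights.
3. Three pull quotes suitable for social media.
Tone: {tone}. Length: under 300 words."""

def build_prompt(title, guest, topics, tone="conversational"):
    """Fill the template with structured inputs."""
    return SHOW_NOTES_TEMPLATE.format(
        title=title, guest=guest, topics=", ".join(topics), tone=tone
    )

prompt = build_prompt("Owning the Future", "Jane Doe", ["ownership", "automation"])
print(prompt.splitlines()[1])  # Episode title: Owning the Future
```

<p>The moat is the accumulated refinement inside the template, the checklist of what consistently produces good output, not the few lines of glue code around it.</p>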
<p><strong>How it makes money</strong></p>
<ul>
<li><p>Selling prompt packs or template bundles.</p>
</li>
<li><p>Charging for access to a hosted version that hides the prompts behind a form.</p>
</li>
<li><p>Adding consulting or done for you services on top for higher paying clients.</p>
</li>
</ul>
<p><strong>Cost</strong></p>
<p>Very low. You mainly invest time and some API credits to test prompts.</p>
<hr />
<h2 id="heading-summary-start-owning-small-compounding-assets">Summary: Start Owning Small, Compounding Assets</h2>
<p>All of these entry points share the same pattern.</p>
<p>You are moving from:</p>
<ul>
<li>Selling time and isolated deliverables.</li>
</ul>
<p>To:</p>
<ul>
<li>Building assets that can be used and paid for repeatedly.</li>
</ul>
<p>You do not need a big budget to begin. You can start with:</p>
<ul>
<li><p>A micro directory in a niche you understand.</p>
</li>
<li><p>A newsletter that curates and explains a noisy space.</p>
</li>
<li><p>A single dataset that solves a real research problem.</p>
</li>
<li><p>One tiny tool that removes a daily annoyance.</p>
</li>
<li><p>A set of prompts and templates that change how a niche uses AI.</p>
</li>
</ul>
<p>Over time these assets can compound. They can feed each other. A directory can power a dataset sale. A newsletter can grow the user base of a tiny tool. A prompt library can evolve into a full vertical SaaS.</p>
<p>The common thread is simple.</p>
<p>In a world where machines create more and more of the output, you want to own at least a small part of the systems, data and channels that direct that output.</p>
<p>You do not control the entire future economy. But you can control a useful corner of it.</p>
<p>Start small. Start now. And build things that keep working even when you are not.</p>
]]></content:encoded></item><item><title><![CDATA[Is Startup Success Deterministic? Why Founders Should Think in Probabilities]]></title><description><![CDATA[Founders love playbooks.
“Follow these 7 steps.”“Use this funnel.”“Copy this cold email script.”
The implication is always the same: if you do what successful founders did, you’ll get what they got. Same inputs, same outputs.
That’s a deterministic v...]]></description><link>https://blog.samiralibabic.com/is-startup-success-deterministic-why-founders-should-think-in-probabilities</link><guid isPermaLink="true">https://blog.samiralibabic.com/is-startup-success-deterministic-why-founders-should-think-in-probabilities</guid><category><![CDATA[Founders]]></category><category><![CDATA[business]]></category><category><![CDATA[startup]]></category><category><![CDATA[lean startup]]></category><category><![CDATA[experimentation]]></category><category><![CDATA[Resilience]]></category><category><![CDATA[determinism]]></category><category><![CDATA[Playbooks]]></category><category><![CDATA[#unicorn]]></category><category><![CDATA[Entrepreneurship]]></category><category><![CDATA[Entrepreneur]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Thu, 04 Dec 2025 19:00:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/uPK2TbJlvMQ/upload/e7312494e48017cabcb2073f60eacd9a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Founders love playbooks.</p>
<p>“Follow these 7 steps.”<br />“Use this funnel.”<br />“Copy this cold email script.”</p>
<p>The implication is always the same: if you do what successful founders did, you’ll get what they got. Same inputs, same outputs.</p>
<p>That’s a <strong>deterministic</strong> view of entrepreneurship.</p>
<p>In reality, startups don’t behave like chemistry experiments. You can do almost everything "right" and still fail. You can also do a lot of things "wrong" and still stumble into something that works.</p>
<p>This doesn’t mean everything is random. It means <strong>success is not guaranteed, only more or less likely</strong>. The rational founder doesn’t look for certainty. They manage probabilities.</p>
<hr />
<h2 id="heading-determinism-vs-reality">Determinism vs. Reality</h2>
<p>In a deterministic system, the rules are simple:</p>
<blockquote>
<p>Same starting conditions → same result, every time.</p>
</blockquote>
<p>Physics experiments. Math equations. Compilers.</p>
<p>If entrepreneurship worked this way, you could plug in:</p>
<ul>
<li><p>X hours of hard work</p>
</li>
<li><p>Y amount of funding</p>
</li>
<li><p>Z-size market</p>
</li>
</ul>
<p>…and reliably output a profitable company.</p>
<p>But we know it doesn’t work like that. Two founders can:</p>
<ul>
<li><p>build similar products,</p>
</li>
<li><p>in similar markets,</p>
</li>
<li><p>with similar effort…</p>
</li>
</ul>
<p>…and get wildly different outcomes.</p>
<p>So instead of asking, <em>“What guarantees success?”</em> a better question is:</p>
<blockquote>
<p><strong>“What increases or decreases the odds?”</strong></p>
</blockquote>
<p>The rest of this article is about separating what you can strongly influence from what you can only partially or not at all control – and how to operate rationally inside that uncertainty.</p>
<hr />
<h2 id="heading-1-what-you-can-actually-influence">1. What You Can Actually Influence</h2>
<p>You don’t control outcomes, but you do control your <strong>inputs</strong> and <strong>behavior</strong>. Some levers matter much more than others.</p>
<h3 id="heading-a-value-proposition-amp-productmarket-fit">a) Value Proposition &amp; Product–Market Fit</h3>
<p>You can’t force a market to want your product. But you <em>can</em> decide whether you:</p>
<ul>
<li><p>talk to customers or just build in isolation,</p>
</li>
<li><p>ship small and iterate or disappear for 12 months,</p>
</li>
<li><p>listen to painful feedback or defend your original idea.</p>
</li>
</ul>
<p>Concrete behaviors that improve the odds:</p>
<ul>
<li><p>Run structured customer interviews instead of guessing needs.</p>
</li>
<li><p>Test positioning and messaging before writing a single line of code.</p>
</li>
<li><p>Ship a minimum viable version and watch what people <strong>do</strong>, not just what they <strong>say</strong>.</p>
</li>
</ul>
<p>You still don’t get a guarantee. You just get <strong>much better signal</strong> much earlier.</p>
<h3 id="heading-b-business-model-amp-strategy">b) Business Model &amp; Strategy</h3>
<p>A surprising number of startups fail not because the product is terrible, but because the <strong>business model</strong> is.</p>
<p>You <em>do</em> control:</p>
<ul>
<li><p>how you charge (subscription, usage-based, one-off, hybrid),</p>
</li>
<li><p>who you charge (SMB vs enterprise, prosumers vs hobbyists),</p>
</li>
<li><p>how you reach them (SEO, outbound, paid, partnerships, marketplaces).</p>
</li>
</ul>
<p>Treat these as variables, not destiny. Small changes here can radically change unit economics and survival chances.</p>
<h3 id="heading-c-team-amp-culture">c) Team &amp; Culture</h3>
<p>You don’t get to choose macroeconomic conditions. You <em>do</em> choose:</p>
<ul>
<li><p>whether you hire slowly or impulsively,</p>
</li>
<li><p>whether you tolerate low standards,</p>
</li>
<li><p>whether you reward learning or blame.</p>
</li>
</ul>
<p>Teams with a culture of ownership and honesty can course-correct faster. Teams built on ego and politics usually don’t.</p>
<h3 id="heading-d-execution-amp-adaptability">d) Execution &amp; Adaptability</h3>
<p>You don’t control competitor moves. You <em>do</em> control how fast you respond.</p>
<p>Questions that matter:</p>
<ul>
<li><p>Do you ship weekly or yearly?</p>
</li>
<li><p>Do you review metrics and feedback regularly or fly blind?</p>
</li>
<li><p>Do you pivot when evidence is clear, or cling to sunk costs?</p>
</li>
</ul>
<p>Good execution doesn’t guarantee success. But weak execution almost guarantees <strong>you won’t even find out</strong> whether the idea could have worked.</p>
<hr />
<h2 id="heading-2-what-you-dont-control-but-must-respect">2. What You Don’t Control (But Must Respect)</h2>
<p>This is the part most playbooks gloss over.</p>
<h3 id="heading-a-macroeconomic-conditions">a) Macroeconomic Conditions</h3>
<ul>
<li><p>Interest rates change.</p>
</li>
<li><p>Funding markets freeze.</p>
</li>
<li><p>Consumer spending shifts.</p>
</li>
</ul>
<p>You can’t vote these away.</p>
<p>You can, however:</p>
<ul>
<li><p>keep fixed costs low,</p>
</li>
<li><p>maintain some runway buffer,</p>
</li>
<li><p>avoid business models that only work in perfect conditions.</p>
</li>
</ul>
<h3 id="heading-b-industry-and-technology-shifts">b) Industry and Technology Shifts</h3>
<p>Regulation, new platforms, and new technologies can:</p>
<ul>
<li><p>suddenly open huge opportunities, or</p>
</li>
<li><p>instantly kill your main advantage.</p>
</li>
</ul>
<p>Again: not controllable. The rational move isn’t denial, it’s <strong>awareness</strong>:</p>
<ul>
<li><p>follow what’s happening in your space,</p>
</li>
<li><p>understand where the power is moving,</p>
</li>
<li><p>be willing to adapt your plans when the terrain changes.</p>
</li>
</ul>
<h3 id="heading-c-luck-amp-timing">c) Luck &amp; Timing</h3>
<p>No one likes to admit how much <strong>sheer luck</strong> is involved:</p>
<ul>
<li><p>meeting exactly the right person at the right time,</p>
</li>
<li><p>one blog post or tweet going viral,</p>
</li>
<li><p>a platform feature you rely on <em>not</em> getting killed.</p>
</li>
</ul>
<p>You can’t manufacture luck on demand.</p>
<p>But you <strong>can</strong> increase your “luck surface area”:</p>
<ul>
<li><p>share what you’re building,</p>
</li>
<li><p>talk to more customers,</p>
</li>
<li><p>participate in relevant communities,</p>
</li>
<li><p>show your work in public.</p>
</li>
</ul>
<p>Each of these is a small lottery ticket. Most do nothing. A few matter a lot.</p>
<hr />
<h2 id="heading-3-timing-as-a-force-multiplier">3. Timing as a Force Multiplier</h2>
<p>A painful truth: the same idea can be a disaster in 2010 and a rocket ship in 2025.</p>
<ul>
<li><p>Infrastructure might be missing.</p>
</li>
<li><p>Habits might not be formed yet.</p>
</li>
<li><p>The tools you depend on may not exist.</p>
</li>
</ul>
<p>Or the opposite: you might be too late.</p>
<p>You can’t perfectly time markets. Nobody can. What you <em>can</em> do is:</p>
<ul>
<li><p>pay attention to how fast your space is moving,</p>
</li>
<li><p>pick problems where adoption is clearly trending up, not down,</p>
</li>
<li><p>avoid betting everything on a shrinking or stagnant behavior.</p>
</li>
</ul>
<p>Timing isn’t something you “solve”. It’s a <strong>background multiplier</strong> on everything else you do.</p>
<hr />
<h2 id="heading-4-how-rational-founders-operate-in-an-uncertain-world">4. How Rational Founders Operate in an Uncertain World</h2>
<p>If success isn’t deterministic, what do you do with that?</p>
<p>You stop looking for guarantees and start thinking like this:</p>
<blockquote>
<p><strong>“Given that outcomes are uncertain, what strategy gives me the best odds and the cheapest information?”</strong></p>
</blockquote>
<p>Here are practical tools for that.</p>
<h3 id="heading-a-lean-loops-instead-of-grand-plans">a) Lean Loops Instead of Grand Plans</h3>
<p>The classic loop:</p>
<ol>
<li><p><strong>Build</strong>: the smallest version that can test a real behavior.</p>
</li>
<li><p><strong>Measure</strong>: what people actually do, not what you hoped they’d do.</p>
</li>
<li><p><strong>Learn</strong>: keep what works, discard what doesn’t.</p>
</li>
</ol>
<p>The point isn’t “move fast and break things.” The point is:</p>
<ul>
<li><p>move <em>small</em>,</p>
</li>
<li><p>learn <em>fast</em>,</p>
</li>
<li><p>and avoid <em>big, blind</em> bets.</p>
</li>
</ul>
<h3 id="heading-b-scenario-planning">b) Scenario Planning</h3>
<p>Instead of one story about the future, sketch three:</p>
<ul>
<li><p><strong>Best case</strong>: what if everything goes unusually well?</p>
</li>
<li><p><strong>Base case</strong>: what is realistically likely with current assumptions?</p>
</li>
<li><p><strong>Worst case</strong>: what if key risks materialize?</p>
</li>
</ul>
<p>Then decide <strong>ahead of time</strong>:</p>
<ul>
<li><p>what you’ll do if you hit the best case (double down?),</p>
</li>
<li><p>how you’ll react in the base case (optimize, refine),</p>
</li>
<li><p>when you’ll stop or pivot in the worst case (clear kill criteria).</p>
</li>
</ul>
<p>This doesn’t control the future. It prevents panic and denial when reality shows up.</p>
<h3 id="heading-c-financial-flexibility">c) Financial Flexibility</h3>
<p>Runway doesn’t guarantee success, but lack of runway guarantees you don’t get more experiments.</p>
<p>Principles that help:</p>
<ul>
<li><p>keep fixed costs lower than your ego wants,</p>
</li>
<li><p>don’t hire ahead of proven demand,</p>
</li>
<li><p>assume revenue will be lumpier and slower than your optimistic spreadsheet.</p>
</li>
</ul>
<p>Cash buys you <strong>time to learn</strong>.</p>
<h3 id="heading-d-networks-amp-mentors">d) Networks &amp; Mentors</h3>
<p>A single warm intro, a piece of hard-won advice, or a reality check can save months of spinning your wheels.</p>
<p>Rational founders:</p>
<ul>
<li><p>deliberately seek out people who have done similar things,</p>
</li>
<li><p>ask specific, grounded questions,</p>
</li>
<li><p>trade ego (“I know what I’m doing”) for information (“What am I missing?”).</p>
</li>
</ul>
<p>You still own the decisions. But your decisions are now made on better data.</p>
<hr />
<h2 id="heading-5-the-mindset-shift-from-guarantees-to-probabilities">5. The Mindset Shift: From Guarantees to Probabilities</h2>
<p>If you internalize that success isn’t deterministic, a few things change.</p>
<h3 id="heading-a-failure-becomes-data-not-verdict">a) Failure Becomes Data, Not Verdict</h3>
<p>When success is seen as a guaranteed result of “doing it right”, failure feels like a personal verdict: <em>you</em> are the flaw.</p>
<p>When you see it as probabilistic, each attempt is more like:</p>
<ul>
<li><p>a noisy experiment under uncertain conditions,</p>
</li>
<li><p>with imperfect information,</p>
</li>
<li><p>and a mix of controllable and uncontrollable factors.</p>
</li>
</ul>
<p>You still take responsibility for mistakes. But you don’t pretend you could have removed all randomness from the system.</p>
<h3 id="heading-b-multiple-at-bats-matter-more-than-one-perfect-swing">b) Multiple At-Bats Matter More Than One Perfect Swing</h3>
<p>If outcomes are probabilistic, then:</p>
<ul>
<li><p>one attempt is rarely enough data,</p>
</li>
<li><p>optimizing for “never fail” is irrational,</p>
</li>
<li><p>designing your life so you can survive several tries makes more sense than aiming for the miracle.</p>
</li>
</ul>
<p>You want:</p>
<ul>
<li><p>low downside per attempt,</p>
</li>
<li><p>meaningful upside if it works,</p>
</li>
<li><p>and the ability to try again.</p>
</li>
</ul>
<h3 id="heading-c-humility-agency">c) Humility + Agency</h3>
<p>The weird, but healthy mix is:</p>
<ul>
<li><p><strong>Humility</strong>: you accept you can’t fully control timing, luck, or macro.</p>
</li>
<li><p><strong>Agency</strong>: you aggressively control what you can: skills, effort, process, focus.</p>
</li>
</ul>
<p>Neither “I control everything” nor “everything is luck” is true. The truth is in the middle, and it’s less emotionally satisfying but more useful.</p>
<hr />
<h2 id="heading-so-is-startup-success-deterministic">So… Is Startup Success Deterministic?</h2>
<p>No.</p>
<p>You can’t plug in effort, funding, and strategy and reliably get a unicorn out the other end.</p>
<p>What you <em>can</em> do is:</p>
<ul>
<li><p>dramatically <strong>improve the odds</strong> with good research, execution, and iteration,</p>
</li>
<li><p><strong>avoid obvious failure modes</strong> (no customers, no runway, no feedback, no adaptability),</p>
</li>
<li><p>structure your life and business so you can survive multiple attempts.</p>
</li>
</ul>
<p>Think of entrepreneurship less as solving a fixed equation and more as:</p>
<blockquote>
<p><strong>a repeated game of stacking small edges in an uncertain environment.</strong></p>
</blockquote>
<p>You don’t get certainty.</p>
<p>You get choices, experiments, and probabilities.</p>
<p>And over enough time, for founders who keep showing up and keep learning, those probabilities start to compound.</p>
]]></content:encoded></item><item><title><![CDATA[Getting Real About Modern LLMs, GPUs, and Agents]]></title><description><![CDATA[1. Why bother understanding this at all?
If you’re a developer or founder, you don’t need to reinvent the math of deep learning. But you do need a solid mental model of:

what modern LLMs really are,

why they’re trained on GPUs/TPUs,

how context wi...]]></description><link>https://blog.samiralibabic.com/getting-real-about-modern-llms-gpus-and-agents</link><guid isPermaLink="true">https://blog.samiralibabic.com/getting-real-about-modern-llms-gpus-and-agents</guid><category><![CDATA[llm]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[agents]]></category><category><![CDATA[gpt]]></category><category><![CDATA[tpu]]></category><category><![CDATA[ai training]]></category><category><![CDATA[inference]]></category><category><![CDATA[Tokenization]]></category><category><![CDATA[kvcache]]></category><category><![CDATA[AI Alignment]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Wed, 03 Dec 2025 08:00:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4ZimN_iVwWo/upload/fc5e6d54519fa7af20fb6b72be15f452.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-1-why-bother-understanding-this-at-all">1. Why bother understanding this at all?</h2>
<p>If you’re a developer or founder, you don’t need to reinvent the math of deep learning. But you <em>do</em> need a solid mental model of:</p>
<ul>
<li><p>what modern LLMs really are,</p>
</li>
<li><p>why they’re trained on GPUs/TPUs,</p>
</li>
<li><p>how context windows, tokenization, and KV cache shape what’s possible,</p>
</li>
<li><p>what “SFT”, “RLHF”, and “agents” actually mean in practice.</p>
</li>
</ul>
<p>With that, you can:</p>
<ul>
<li><p>choose the right models and infrastructure,</p>
</li>
<li><p>design prompts and tools that <em>cooperate</em> with the model instead of fighting it,</p>
</li>
<li><p>avoid the common traps that waste tokens, time, and effort.</p>
</li>
</ul>
<p>This post is that kind of primer: opinionated, practical, and aimed at people who want to build.</p>
<hr />
<h2 id="heading-2-what-a-modern-llm-really-is">2. What a modern LLM really is</h2>
<p>At its core, a large language model is:</p>
<blockquote>
<p>A giant function that takes a sequence of tokens and predicts the probability distribution of the <strong>next</strong> token.</p>
</blockquote>
<p>Three key pieces:</p>
<ol>
<li><p><strong>Tokenization</strong></p>
<ul>
<li><p>Text is split into <strong>tokens</strong> (subwords, word pieces, sometimes whole words or symbols).</p>
</li>
<li><p>Each token is mapped to an integer ID.</p>
</li>
<li><p>Different tokenizers → different token counts → different cost and context usage.</p>
</li>
</ul>
</li>
<li><p><strong>Embeddings and Transformer blocks</strong></p>
<ul>
<li><p>Each token ID → high-dimensional vector (embedding).</p>
</li>
<li><p>These vectors pass through a stack of <strong>Transformer blocks</strong>.</p>
</li>
<li><p>Each block has:</p>
<ul>
<li><p><strong>Self-attention</strong>: each token looks at other tokens and decides who matters.</p>
</li>
<li><p><strong>Feed-forward network</strong>: a small neural net applied to each token’s representation.</p>
</li>
<li><p><strong>Residual connections + normalization</strong>: keep things stable and trainable.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Output layer</strong></p>
<ul>
<li><p>Final hidden state at each position → logits over the vocabulary.</p>
</li>
<li><p>After softmax, you get a probability distribution over the next token.</p>
</li>
<li><p>During generation, you sample repeatedly from that distribution.</p>
</li>
</ul>
</li>
</ol>
<p>The model doesn’t “think in words”. It operates entirely on vectors, but is trained in such a way that those vectors encode useful structure about language, code, and the world.</p>
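<p>The last step — logits → softmax → sampling — can be sketched in miniature. This is a toy illustration (a made-up four-word vocabulary and hand-picked logits, not a real model), just to make the mechanics concrete:</p>

```python
import math
import random

def softmax(logits):
    # Subtract the max for numerical stability, then normalize to probabilities.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and the logits the "model" produced for the next position.
vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 0.5, 1.0, -1.0]

probs = softmax(logits)                               # distribution over the next token
next_token = random.choices(vocab, weights=probs)[0]  # sample one token from it
```

During generation this sample is appended to the sequence and the whole thing repeats, one token at a time.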
<hr />
<h2 id="heading-3-how-llms-are-trained-end-to-end">3. How LLMs are trained (end to end)</h2>
<p>Training has three main phases: <strong>pretraining</strong>, <strong>post-training</strong>, and <strong>inference-time steering</strong>.</p>
<h3 id="heading-31-pretraining-turning-text-into-a-base-model">3.1 Pretraining: turning text into a base model</h3>
<p>Pretraining is where the model learns most of its knowledge and raw capabilities.</p>
<p><strong>Step 1: Data</strong></p>
<ul>
<li><p>Collect large corpora: web pages, books, documentation, code repositories, etc.</p>
</li>
<li><p>Clean and filter:</p>
<ul>
<li><p>remove obvious junk and duplicates,</p>
</li>
<li><p>filter by language,</p>
</li>
<li><p>control for some types of toxic content.</p>
</li>
</ul>
</li>
<li><p>Tokenize into integer sequences.</p>
</li>
</ul>
<p><strong>Step 2: Objective</strong></p>
<p>The core pretraining task is <strong>next-token prediction</strong>:</p>
<ul>
<li><p>Given: a sequence of tokens <code>[x1, x2, ..., xT]</code>.</p>
</li>
<li><p>Task: predict token <code>x2</code> from <code>[x1]</code>, <code>x3</code> from <code>[x1, x2]</code>, and so on.</p>
</li>
<li><p>The loss function (cross-entropy) punishes the model when it assigns low probability to the actual next token.</p>
</li>
</ul>
<p>Intuitively:</p>
<blockquote>
<p>The model gets better by being <em>less surprised</em> by real text.</p>
</blockquote>
<p>Over trillions of tokens, the easiest way to reduce surprise is to internalize grammar, common patterns, factual structure, and even rough reasoning shortcuts.</p>
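<p>A tiny numeric sketch of that objective (toy probabilities, not real model output): cross-entropy is just the average negative log-probability the model assigned to the token that actually came next, so being "less surprised" literally means a lower loss.</p>

```python
import math

def next_token_loss(probs_per_step, targets):
    # Average cross-entropy: -log p(actual next token), averaged over positions.
    losses = [-math.log(step[t]) for step, t in zip(probs_per_step, targets)]
    return sum(losses) / len(losses)

# Toy model output: a distribution over a 3-token vocabulary at each position.
probs = [
    [0.7, 0.2, 0.1],   # fairly sure the next token is token 0
    [0.1, 0.8, 0.1],   # ...and that the one after is token 1
]
targets = [0, 1]       # the tokens that actually came next

loss_confident = next_token_loss(probs, targets)
loss_surprised = next_token_loss([[0.1, 0.1, 0.8]] * 2, targets)
# Higher probability on the real tokens → lower loss.
```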
<p><strong>Step 3: Optimization</strong></p>
<ul>
<li><p>Run forward pass → compute predictions and loss.</p>
</li>
<li><p>Use backpropagation to compute gradients.</p>
</li>
<li><p>Use an optimizer (e.g. AdamW) to tweak weights slightly.</p>
</li>
<li><p>Repeat this at insane scale across GPU/TPU clusters.</p>
</li>
</ul>
<p>At the end, you have a <strong>base model</strong>: great at modeling text, but not yet a polite assistant.</p>
<h3 id="heading-32-why-gpustpus-instead-of-cpus">3.2 Why GPUs/TPUs instead of CPUs</h3>
<p>Training is dominated by a small set of operations:</p>
<ul>
<li><p>large <strong>matrix multiplications</strong> (matmul) for almost every layer,</p>
</li>
<li><p>simple elementwise operations (nonlinearities, normalization),</p>
</li>
<li><p>repeated over and over on huge tensors.</p>
</li>
</ul>
<p>CPUs are designed for:</p>
<ul>
<li><p>a small number of powerful, flexible cores,</p>
</li>
<li><p>low-latency, branching-heavy workloads (OS, databases, web servers).</p>
</li>
</ul>
<p>GPUs/TPUs are designed for:</p>
<ul>
<li><p>thousands of smaller, simpler cores,</p>
</li>
<li><p>massive parallelism,</p>
</li>
<li><p>very high memory bandwidth,</p>
</li>
<li><p>specialized matrix-multiply units (tensor cores).</p>
</li>
</ul>
<p>If your job is essentially “multiply big matrices and do the same arithmetic on millions of numbers”, GPUs/TPUs will outperform CPUs by orders of magnitude in speed and energy efficiency.</p>
<p>Short version:</p>
<blockquote>
<p>Pretraining LLMs is a giant dense linear algebra problem. GPUs/TPUs are built for that; CPUs are not.</p>
</blockquote>
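<p>To get a feel for the scale, here is a back-of-the-envelope FLOP count for one feed-forward sublayer. The layer sizes are my own rough, GPT-2-ish placeholders, purely for intuition:</p>

```python
def matmul_flops(m, k, n):
    # An (m × k) · (k × n) matmul takes roughly 2·m·k·n floating-point ops
    # (one multiply + one add per inner-product term).
    return 2 * m * k * n

batch_tokens = 2048      # tokens processed together
d_model = 1600           # hidden size
d_ff = 4 * d_model       # feed-forward inner size

# One feed-forward sublayer: up-projection plus down-projection.
ff = (matmul_flops(batch_tokens, d_model, d_ff)
      + matmul_flops(batch_tokens, d_ff, d_model))
# ~8×10^10 FLOPs for a single sublayer on one batch — and a full model
# runs dozens of layers, over trillions of tokens.
```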
<h3 id="heading-33-post-training-from-raw-model-to-assistant">3.3 Post-training: from raw model to assistant</h3>
<p>Pretraining gives you a model that behaves like <strong>smart autocomplete</strong>. To turn it into a useful assistant, there are two major post-training steps.</p>
<h4 id="heading-supervised-fine-tuning-sft">Supervised fine-tuning (SFT)</h4>
<ul>
<li><p>Collect <strong>(instruction → good response)</strong> pairs.</p>
</li>
<li><p>Train the model again on this dataset.</p>
</li>
<li><p>Same next-token loss, but now:</p>
<ul>
<li><p>inputs look like user prompts/questions,</p>
</li>
<li><p>outputs are curated, high-quality answers.</p>
</li>
</ul>
</li>
</ul>
<p>This teaches the model to:</p>
<ul>
<li><p>treat the latest user message as a task to respond to,</p>
</li>
<li><p>answer clearly instead of just “continuing the text”,</p>
</li>
<li><p>follow patterns like “answer step-by-step” or “output valid JSON”.</p>
</li>
</ul>
<p>This produces the familiar <strong>instruct/chat</strong> variants.</p>
<h4 id="heading-preference-optimization-rlhf-dpo-etc">Preference optimization (RLHF, DPO, etc.)</h4>
<p>Now you optimize for <em>style and safety</em>.</p>
<ul>
<li><p>Collect data where humans compare two model responses (A vs B) and pick the better one.</p>
</li>
<li><p>Train a preference or reward model that predicts which response humans prefer.</p>
</li>
<li><p>Adjust the base model to produce responses that score better under this preference model.</p>
</li>
</ul>
<p>Effects:</p>
<ul>
<li><p>Reduces toxic and clearly harmful behavior.</p>
</li>
<li><p>Encourages helpful, polite, cautious answers.</p>
</li>
<li><p>Sometimes adds more hedging and verbosity than you personally might like.</p>
</li>
</ul>
<h3 id="heading-34-inference-how-generation-actually-works">3.4 Inference: how generation actually works</h3>
<p>At inference time:</p>
<ol>
<li><p>Prompt is tokenized.</p>
</li>
<li><p>Tokens are embedded and run through the Transformer stack.</p>
</li>
<li><p>The model outputs a probability distribution over next tokens.</p>
</li>
<li><p>A decoding strategy chooses the next token:</p>
<ul>
<li><p><strong>Greedy</strong>: always pick the most probable.</p>
</li>
<li><p><strong>Top-k</strong>: sample from the k most likely.</p>
</li>
<li><p><strong>Top-p (nucleus)</strong>: sample from the smallest set whose total probability ≥ p.</p>
</li>
<li><p><strong>Temperature</strong>: make the distribution sharper or flatter.</p>
</li>
</ul>
</li>
<li><p>Append that token and repeat.</p>
</li>
</ol>
<p>Two key knobs for you as a user:</p>
<ul>
<li><p><strong>Sampling strategy</strong> → controls creativity vs determinism.</p>
</li>
<li><p><strong>Stopping criteria</strong> → when to end (max length, special tokens, etc.).</p>
</li>
</ul>
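<p>The decoding strategies above fit in a few lines. This toy sampler (hand-picked logits, no real model) shows how temperature and top-k interact with the softmax from earlier:</p>

```python
import math
import random

def sample_next(logits, temperature=1.0, top_k=None):
    """Toy decoder: temperature scaling, optional top-k filtering, then sampling."""
    scaled = [x / temperature for x in logits]  # sharper if T < 1, flatter if T > 1
    if top_k is not None:
        # Zero out everything below the k-th largest logit.
        cutoff = sorted(scaled, reverse=True)[top_k - 1]
        scaled = [x if x >= cutoff else float("-inf") for x in scaled]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [3.0, 1.0, 0.5, -2.0]
greedy = sample_next(logits, temperature=1e-6)  # near-zero temperature ≈ greedy
```

With `top_k=2`, only the two most probable tokens can ever be sampled; with a tiny temperature, the distribution collapses onto the single most probable one.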
<hr />
<h2 id="heading-4-context-windows-tokenization-and-kv-cache">4. Context windows, tokenization, and KV cache</h2>
<p>You never interact with “the model in the abstract”. You interact with a specific model that has:</p>
<ul>
<li><p>a <strong>tokenizer</strong>,</p>
</li>
<li><p>a <strong>maximum context window</strong>, and</p>
</li>
<li><p>performance tricks like <strong>KV cache</strong>.</p>
</li>
</ul>
<p>All three matter a lot in practice.</p>
<h3 id="heading-41-tokenization">4.1 Tokenization</h3>
<p>Tokenization decides:</p>
<ul>
<li><p>how many tokens your text turns into,</p>
</li>
<li><p>which sequences are “cheap” or “expensive”,</p>
</li>
<li><p>how efficiently code, emojis, non-English languages, etc. are represented.</p>
</li>
</ul>
<p>Why you should care:</p>
<ul>
<li><p>Token count drives <strong>latency and cost</strong> (on cloud APIs).</p>
</li>
<li><p>Different models can be dramatically more efficient for certain kinds of text (e.g. code-oriented tokenizers make code cheaper to process).</p>
</li>
</ul>
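<p>A deliberately crude illustration of why the tokenizer matters for cost (word-level vs character-level splitting as stand-ins for a coarse vs fine tokenizer; real BPE tokenizers land somewhere in between):</p>

```python
def word_tokens(text):
    return text.split()        # coarse "tokenizer": few tokens

def char_tokens(text):
    return list(text)          # fine "tokenizer": many tokens

text = "def add(a, b): return a + b"
word_count = len(word_tokens(text))
char_count = len(char_tokens(text))

cost_ratio = char_count / word_count  # same text, very different bill
```

Same string, ~4× difference in token count — and on a metered API, token count is what you pay for.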
<h3 id="heading-42-context-window">4.2 Context window</h3>
<p>The context window is the maximum number of tokens the model can see in one go. It has to fit:</p>
<ul>
<li><p>system prompt,</p>
</li>
<li><p>user messages,</p>
</li>
<li><p>chat history,</p>
</li>
<li><p>tools’ schemas and logs (if any),</p>
</li>
<li><p>your actual task content (code, docs, etc.),</p>
</li>
<li><p>and sometimes the output itself.</p>
</li>
</ul>
<p>Implications:</p>
<ul>
<li><p>Long conversations and large docs don’t fit in raw form.</p>
</li>
<li><p>You need strategies: summarization, retrieval, trimming.</p>
</li>
<li><p>For RAG, you’re always balancing “instructions + query + retrieved chunks” against a hard limit.</p>
</li>
</ul>
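<p>That balancing act is literally just budget arithmetic. A sketch, with hypothetical numbers for a 128k-context model:</p>

```python
def retrieval_budget(context_window, system_tokens, history_tokens, reserve_for_output):
    """How many tokens are left for retrieved chunks in a RAG prompt?"""
    remaining = context_window - system_tokens - history_tokens - reserve_for_output
    return max(remaining, 0)

budget = retrieval_budget(
    context_window=128_000,
    system_tokens=1_500,
    history_tokens=6_000,
    reserve_for_output=4_000,
)
chunks_that_fit = budget // 800   # assuming ~800-token retrieved chunks
```

When history grows, the budget shrinks — which is exactly why summarization and trimming become necessary.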
<h3 id="heading-43-kv-cache-making-long-generations-feasible">4.3 KV cache: making long generations feasible</h3>
<p>Recomputed from scratch for every new token, attention over the full prefix makes generation cost blow up with sequence length. The KV cache is the standard trick to avoid that.</p>
<p>During generation:</p>
<ul>
<li><p>For each token and each layer, the model computes <strong>keys (K)</strong> and <strong>values (V)</strong>.</p>
</li>
<li><p>These are stored in a <strong>KV cache</strong>.</p>
</li>
<li><p>For each new token, the model:</p>
<ul>
<li><p>reuses Ks and Vs for all previous tokens from the cache,</p>
</li>
<li><p>computes new Q/K/V for the current token.</p>
</li>
</ul>
</li>
</ul>
<p>This keeps the cost per new token reasonable and enables efficient streaming.</p>
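<p>Stripped of the actual linear algebra, the bookkeeping is just an append-only store per layer. A minimal sketch (strings stand in for the real K/V vectors):</p>

```python
class ToyKVCache:
    """Minimal sketch: store K/V per position, append one entry per new token."""
    def __init__(self):
        self.keys = []
        self.values = []

    def step(self, new_key, new_value):
        # All previous K/V entries are reused from the cache;
        # only the new token's K/V is computed this step.
        self.keys.append(new_key)
        self.values.append(new_value)
        return list(zip(self.keys, self.values))  # what the new token attends over

cache = ToyKVCache()
for t in range(3):
    visible = cache.step(f"k{t}", f"v{t}")
# After 3 steps the cache holds 3 entries; each step added exactly one.
```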
<h3 id="heading-44-long-context-is-not-perfect-memory">4.4 Long context is not perfect memory</h3>
<p>Even with huge context windows (e.g. 128k tokens):</p>
<ul>
<li><p>Attention is <strong>soft</strong>: tokens distribute their focus over many others.</p>
</li>
<li><p>Models tend to focus more on <strong>recent</strong> or <strong>salient</strong> tokens.</p>
</li>
<li><p>Training may be optimized more for shorter ranges.</p>
</li>
</ul>
<p>Practical strategies:</p>
<ul>
<li><p>Don’t just paste entire books or giant logs.</p>
</li>
<li><p><strong>Summarize the core</strong>: goals, key facts, definitions.</p>
</li>
<li><p><strong>Repeat crucial early information</strong> near where it’s actually needed.</p>
</li>
<li><p>For retrieval-based systems, feed the model <strong>small, relevant chunks</strong> instead of huge blobs.</p>
</li>
</ul>
<hr />
<h2 id="heading-5-how-alignment-and-prompts-interact">5. How alignment and prompts interact</h2>
<p>You can think of a modern assistant model as the sum of four layers:</p>
<ol>
<li><p><strong>Pretraining</strong> → knowledge and general patterns.</p>
</li>
<li><p><strong>SFT</strong> → instruction-following behavior.</p>
</li>
<li><p><strong>Preference optimization</strong> → style and safety.</p>
</li>
<li><p><strong>System prompt</strong> → per-session steering.</p>
</li>
</ol>
<p>When the model’s behavior feels wrong:</p>
<ul>
<li><p>If it <strong>doesn’t understand the task</strong> at all → capability issue (model too small or wrong domain).</p>
</li>
<li><p>If it <strong>understands but ignores structure</strong> (e.g. you asked for JSON, it gives prose) → prompt design or SFT limitations.</p>
</li>
<li><p>If it <strong>refuses too much or over-hedges</strong> → RLHF/alignment layer.</p>
</li>
<li><p>If it <strong>usually behaves but slips on details</strong> → fix with better system prompts, clearer instructions, or examples.</p>
</li>
</ul>
<p>System prompts are powerful but bounded:</p>
<ul>
<li><p>You can set role, style, formatting rules, tool policies.</p>
</li>
<li><p>You can’t give it knowledge it doesn’t have.</p>
</li>
<li><p>You can’t fully override strong safety constraints.</p>
</li>
</ul>
<hr />
<h2 id="heading-6-agents-llms-that-can-act-in-a-loop">6. Agents: LLMs that can act in a loop</h2>
<p>“Agentic AI” sounds mysterious, but in practice an agent is just:</p>
<blockquote>
<p>A loop where the model can <strong>plan → act (use tools) → observe → update its plan</strong>, instead of answering once and stopping.</p>
</blockquote>
<h3 id="heading-61-basic-agent-loop">6.1 Basic agent loop</h3>
<p>A minimal agent loop looks like:</p>
<ol>
<li><p>Maintain a <strong>state</strong>:</p>
<ul>
<li><p>user goal,</p>
</li>
<li><p>relevant context (files, docs, notes),</p>
</li>
<li><p>history of actions and observations.</p>
</li>
</ul>
</li>
<li><p>Call the model with the current state.</p>
</li>
<li><p>The model returns either:</p>
<ul>
<li><p>a <strong>tool call</strong> (e.g. read a file, run a command, fetch a URL), or</p>
</li>
<li><p>a <strong>final answer</strong>.</p>
</li>
</ul>
</li>
<li><p>If it’s a tool call:</p>
<ul>
<li><p>the environment runs the tool,</p>
</li>
<li><p>the result is appended to the state,</p>
</li>
<li><p>loop continues.</p>
</li>
</ul>
</li>
<li><p>If it’s done or stuck:</p>
<ul>
<li>the loop exits with a final answer or a failure report.</li>
</ul>
</li>
</ol>
<p>All popular “agent frameworks” are just structured versions of this idea.</p>
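<p>The loop above fits in a dozen lines. This is a deliberately minimal sketch — <code>model</code> is any callable standing in for a real LLM call, and the tool and decision formats are my own invention, not any particular framework's API:</p>

```python
def run_agent(model, tools, goal, max_steps=8):
    """Minimal agent loop: plan → act → observe, with a step limit as guardrail.
    `model(state)` returns either {"tool": name, "args": {...}} or {"answer": text}."""
    state = {"goal": goal, "history": []}
    for _ in range(max_steps):
        decision = model(state)
        if "answer" in decision:                   # final answer: exit the loop
            return decision["answer"]
        tool_name, args = decision["tool"], decision.get("args", {})
        observation = tools[tool_name](**args)     # the environment runs the tool
        state["history"].append((tool_name, args, observation))
    # Give up gracefully instead of looping forever.
    return "Gave up after max_steps; tried: " + repr(state["history"])

# Toy model: look something up once, then answer with the observation.
def toy_model(state):
    if not state["history"]:
        return {"tool": "lookup", "args": {"query": "answer"}}
    return {"answer": state["history"][-1][2]}

result = run_agent(toy_model, {"lookup": lambda query: "42"}, goal="find the answer")
```

Everything else a framework adds — tool schemas, retries, memory, tracing — is layered on top of this skeleton.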
<h3 id="heading-62-tools-define-what-an-agent-can-do">6.2 Tools define what an agent can do</h3>
<p>Examples of tools:</p>
<ul>
<li><p><strong>Code agent</strong>:</p>
<ul>
<li><p><code>read_file(path)</code></p>
</li>
<li><p><code>search_files(pattern)</code></p>
</li>
<li><p><code>apply_patch(path, diff)</code></p>
</li>
<li><p><code>run_tests()</code></p>
</li>
</ul>
</li>
<li><p><strong>Research agent</strong>:</p>
<ul>
<li><p><code>web_search(query)</code></p>
</li>
<li><p><code>load_pdf(path/url)</code></p>
</li>
<li><p><code>search_notes(query)</code></p>
</li>
<li><p><code>summarize_chunk(text)</code></p>
</li>
</ul>
</li>
<li><p><strong>Ops agent</strong>:</p>
<ul>
<li><p><code>run_cmd(command)</code></p>
</li>
<li><p><code>deploy_service(config)</code></p>
</li>
<li><p>etc.</p>
</li>
</ul>
</li>
</ul>
<p>More powerful tools mean more capability — and with it, more risk.</p>
<h3 id="heading-63-guardrails-for-agents">6.3 Guardrails for agents</h3>
<p>Because agents can operate in loops and take actions, you need guardrails:</p>
<ul>
<li><p><strong>max_steps</strong> per task (prevent infinite loops).</p>
</li>
<li><p>Clear <strong>termination conditions</strong>.</p>
</li>
<li><p>A way to <strong>give up gracefully</strong>:</p>
<ul>
<li>“Here’s what I tried, what failed, and what’s left for a human.”</li>
</ul>
</li>
<li><p>Tool policies:</p>
<ul>
<li><p>Prefer patches over full rewrites.</p>
</li>
<li><p>Don’t run destructive commands.</p>
</li>
<li><p>Don’t fake tool outputs.</p>
</li>
</ul>
</li>
</ul>
<p>Agent behavior benefits from a good system prompt:</p>
<ul>
<li><p>Tell it to check tool results before proceeding.</p>
</li>
<li><p>Tell it to ask for confirmation before irreversible actions.</p>
</li>
<li><p>Tell it how to report failures.</p>
</li>
</ul>
<h3 id="heading-64-coding-agents-vs-research-agents">6.4 Coding agents vs research agents</h3>
<p><strong>Coding agent</strong> (e.g. in an IDE):</p>
<ul>
<li><p>Goal: operate over a repo to implement features, fix bugs, refactor.</p>
</li>
<li><p>Typical loop:</p>
<ul>
<li>inspect files → propose change → apply patch → run tests → inspect failures → iterate.</li>
</ul>
</li>
<li><p>Needs to be conservative:</p>
<ul>
<li><p>limit scope,</p>
</li>
<li><p>avoid unwanted refactors,</p>
</li>
<li><p>keep changes reviewable.</p>
</li>
</ul>
</li>
</ul>
<p><strong>Research/analysis agent</strong> (e.g. in a web UI):</p>
<ul>
<li><p>Goal: answer a complex question using multiple sources.</p>
</li>
<li><p>Typical loop:</p>
<ul>
<li>clarify question → plan steps → fetch sources → summarize chunks → build hierarchical notes → synthesize answer with citations.</li>
</ul>
</li>
<li><p>Works well with long context and many iterations, especially on local setups where you’re not billed per token.</p>
</li>
</ul>
<hr />
<h2 id="heading-7-how-to-get-more-out-of-llms-in-practice">7. How to get more out of LLMs in practice</h2>
<p>With these concepts in mind, here are some practical principles.</p>
<h3 id="heading-71-work-with-the-models-structure-not-against-it">7.1 Work with the model’s structure, not against it</h3>
<ul>
<li><p>Be aware of the <strong>token budget</strong>.</p>
</li>
<li><p>Put the most important requirements and constraints <strong>near the end of the prompt</strong> where they’re fresh.</p>
</li>
<li><p>Use <strong>summaries</strong> to keep long-term goals and context alive.</p>
</li>
<li><p>For large tasks, split into sub-tasks and use an agent-style loop instead of one giant prompt.</p>
</li>
</ul>
<h3 id="heading-72-use-the-right-model-for-the-job">7.2 Use the right model for the job</h3>
<ul>
<li><p>Bigger, more capable models for:</p>
<ul>
<li><p>difficult reasoning,</p>
</li>
<li><p>complex refactors,</p>
</li>
<li><p>high-stakes answers.</p>
</li>
</ul>
</li>
<li><p>Smaller/local models for:</p>
<ul>
<li><p>boilerplate code,</p>
</li>
<li><p>simple Q&amp;A,</p>
</li>
<li><p>summarization,</p>
</li>
<li><p>experimentation where cost would otherwise explode.</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-73-when-you-move-to-agents-start-conservative">7.3 When you move to agents, start conservative</h3>
<ul>
<li><p>Start with tools that <strong>can’t destroy much</strong>:</p>
<ul>
<li><p>read-only operations,</p>
</li>
<li><p>patch-based writes,</p>
</li>
<li><p>whitelisted commands.</p>
</li>
</ul>
</li>
<li><p>Add more powerful tools only when:</p>
<ul>
<li><p>you have tests,</p>
</li>
<li><p>you trust the agent’s behavior,</p>
</li>
<li><p>and you understand the failure modes.</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-74-iterate-on-system-prompts">7.4 Iterate on system prompts</h3>
<ul>
<li><p>Treat your system prompts as code:</p>
<ul>
<li><p>version them,</p>
</li>
<li><p>refine over time,</p>
</li>
<li><p>test them with real tasks.</p>
</li>
</ul>
</li>
<li><p>Make them explicit about:</p>
<ul>
<li><p>reasoning style,</p>
</li>
<li><p>output format,</p>
</li>
<li><p>safety constraints,</p>
</li>
<li><p>interaction with tools.</p>
</li>
</ul>
</li>
</ul>
<hr />
<h2 id="heading-8-closing-thoughts">8. Closing thoughts</h2>
<p>You don’t need to derive every formula behind Transformers to use LLMs effectively. But you <em>do</em> need a mental model like this:</p>
<ul>
<li><p>LLMs are next-token predictors trained with huge matrix multiplications on GPUs.</p>
</li>
<li><p>Context windows, tokenization, and KV cache define the practical envelope of what you can do.</p>
</li>
<li><p>Post-training (SFT + alignment) and system prompts decide <em>how</em> the model behaves for you.</p>
</li>
<li><p>Agents are just loops around the model with tools, not magic.</p>
</li>
</ul>
<p>Once you see things this way, you can make better decisions about:</p>
<ul>
<li><p>which models to use,</p>
</li>
<li><p>how to structure your prompts and contexts,</p>
</li>
<li><p>when to introduce agents and tools,</p>
</li>
<li><p>and how to balance cloud vs local inference.</p>
</li>
</ul>
<p>That’s where the real leverage is: understanding enough of the internals to design workflows that actually cooperate with the model and then letting the model do what it’s good at, while you focus on building things that matter.</p>
]]></content:encoded></item><item><title><![CDATA[Coding With AI Is a New Skill – And We’re All Wasting Time on the Wrong Decisions]]></title><description><![CDATA[If you’re using tools like Cursor, VS Code + AI extensions, or Zed with agents, you’ve probably felt this:

“I spend way too much time picking models and modes instead of actually shipping code.”

You get:

Multiple models (Composer, Claude, GPT-5, G...]]></description><link>https://blog.samiralibabic.com/coding-with-ai-is-a-new-skill-and-were-all-wasting-time-on-the-wrong-decisions</link><guid isPermaLink="true">https://blog.samiralibabic.com/coding-with-ai-is-a-new-skill-and-were-all-wasting-time-on-the-wrong-decisions</guid><category><![CDATA[coding]]></category><category><![CDATA[Assistant]]></category><category><![CDATA[vibe coding]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[agentic ai development]]></category><category><![CDATA[IDEs]]></category><category><![CDATA[cursor IDE]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Tue, 02 Dec 2025 08:17:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/EJMTKCZ00I0/upload/b3f1bfb22bf152201ecf9b2c81b78bef.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you’re using tools like Cursor, VS Code + AI extensions, or Zed with agents, you’ve probably felt this:</p>
<blockquote>
<p><em>“I spend way too much time picking models and modes instead of actually shipping code.”</em></p>
</blockquote>
<p>You get:</p>
<ul>
<li><p>Multiple models (Composer, Claude, GPT-5, Grok, Qwen…)</p>
</li>
<li><p>“Thinking” vs non-thinking variants</p>
</li>
<li><p>Toggles like <em>Max mode</em>, <em>cloud vs local</em>, <em>Ask vs Agent vs Plan</em></p>
</li>
<li><p>And a UI that changes every few weeks</p>
</li>
</ul>
<p>On paper this is “control”. In practice it’s cognitive overhead.</p>
<p>This post is a first-principles way to think about coding with AI so you can stop micro-optimizing knobs and get <strong>maximum quality per buck (and per minute)</strong>.</p>
<hr />
<h2 id="heading-1-coding-with-ai-is-its-own-skill">1. Coding with AI is its own skill</h2>
<p>It’s easy to think of AI as a smarter autocomplete. It isn’t. To use coding assistants effectively you need to learn a new skillset:</p>
<ul>
<li><p><strong>Framing tasks</strong> so a model can actually solve them</p>
</li>
<li><p><strong>Choosing the right level of autonomy</strong> (advise vs edit vs refactor whole project)</p>
</li>
<li><p><strong>Knowing when to iterate and when to start over</strong></p>
</li>
<li><p><strong>Optimizing for your time, not just tokens</strong></p>
</li>
</ul>
<p>What makes this hard is that most tools expose low-level levers (model, mode, “thinking”, Max, cloud) instead of giving you a simple “do-what-I-mean” abstraction.</p>
<p>So let’s build <em>your own</em> abstraction.</p>
<hr />
<h2 id="heading-2-forget-the-ui-there-are-only-three-real-decisions">2. Forget the UI: there are only three real decisions</h2>
<p>Ignore all the product marketing and menus for a second. Under the hood, almost everything reduces to three questions:</p>
<ol>
<li><p><strong>How big / smart a model do I want?</strong><br /> (Small/cheap vs large/expensive; “thinking” vs “fast”.)</p>
</li>
<li><p><strong>How much of my repo / context does it need to see?</strong><br /> (Normal vs “Max mode” / huge context.)</p>
</li>
<li><p><strong>How much autonomy do I give it?</strong><br /> (Explain only? Edit a few files? Refactor the world? Run in the cloud without me watching?)</p>
</li>
</ol>
<p>Everything else (Ask vs Agent vs Plan, local vs cloud, etc.) is just preconfigured answers to those three.</p>
<p>Once you internalize that, the UX stops feeling like chaos. You’re just tuning:</p>
<blockquote>
<p><strong>Model size × Context budget × Autonomy</strong></p>
</blockquote>
<hr />
<h2 id="heading-3-the-costquality-tradeoff-stop-fighting-physics">3. The cost/quality tradeoff: stop fighting physics</h2>
<p>A lot of people (maybe you) are doing this pattern:</p>
<ol>
<li><p>Try with a cheaper / smaller model.</p>
</li>
<li><p>Get a mediocre answer.</p>
</li>
<li><p>Switch to a top model (Claude Sonnet, GPT-5, etc.).</p>
</li>
<li><p>Finally get something usable.</p>
</li>
</ol>
<p>On paper this “saves money”. In reality, it often doesn’t.</p>
<p>Let’s reason it out.</p>
<ul>
<li><p>Let <code>C_small</code> be the cost of a small/mid model call.</p>
</li>
<li><p>Let <code>C_big</code> be the cost of a top model call. (Say ~10× <code>C_small</code> for intuition.)</p>
</li>
<li><p>Let <code>p</code> be the fraction of tasks where you end up using the big model anyway.</p>
</li>
</ul>
<p>Two strategies:</p>
<ul>
<li><p><strong>Strategy A – Always use big model</strong><br />  Cost ≈ <code>C_big</code></p>
</li>
<li><p><strong>Strategy B – Try small, escalate if needed</strong><br />  Cost ≈ <code>C_small + p * C_big</code></p>
</li>
</ul>
<p>If <code>C_small = 1</code> and <code>C_big = 10</code>:</p>
<ul>
<li><p>Always-big → cost = 10</p>
</li>
<li><p>Try-small → cost = <code>1 + 10p</code></p>
</li>
</ul>
<p>You only <em>beat</em> always-big if:</p>
<p><code>1 + 10p &lt; 10</code> → <code>p &lt; 0.9</code></p>
<p>So if you escalate to a big model more than ~90% of the time, the “cheap first” step is pure overhead.</p>
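<p>The break-even is easy to check numerically, using the same illustrative prices as above:</p>

```python
def expected_cost(c_small, c_big, p_escalate):
    """Strategy B: try the small model first, escalate with probability p."""
    return c_small + p_escalate * c_big

c_small, c_big = 1.0, 10.0
always_big = c_big                                 # Strategy A

cheap_half = expected_cost(c_small, c_big, 0.5)    # escalate 50% of the time → 6.0
cheap_most = expected_cost(c_small, c_big, 0.95)   # escalate 95% of the time → 10.5
# Below p = 0.9 try-small wins; above it, the cheap first call is pure overhead.
```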
<p>In practice:</p>
<ul>
<li><p>For <strong>hard, ambiguous tasks</strong> (understanding systems, ugly bugs, architecture), most people <em>do</em> escalate. So going straight to the best model is actually rational.</p>
</li>
<li><p>For <strong>small, mechanical tasks</strong>, you rarely escalate, so a cheaper model really can save money.</p>
</li>
</ul>
<p>The trick isn’t “optimize every call”. The trick is:</p>
<blockquote>
<p><strong>Know which tasks almost always need a top model, and stop pretending otherwise.</strong></p>
</blockquote>
<hr />
<h2 id="heading-4-classify-tasks-not-prompts">4. Classify tasks, not prompts</h2>
<p>Instead of choosing a model for each prompt, classify the <em>type of work</em> first.</p>
<h3 id="heading-bucket-1-hard-ambiguous-high-value-work">Bucket 1 – Hard / ambiguous / high-value work</h3>
<p>Examples:</p>
<ul>
<li><p>Understanding an unfamiliar subsystem end-to-end</p>
</li>
<li><p>Designing a new feature that cuts across backend, frontend, DB</p>
</li>
<li><p>Debugging a weird bug that involves async jobs, queues, and timeouts</p>
</li>
<li><p>Reasoning about tradeoffs (performance, correctness, architecture)</p>
</li>
</ul>
<p>Characteristics:</p>
<ul>
<li><p>You’d personally need a while to think</p>
</li>
<li><p>There isn’t a single obvious local fix</p>
</li>
<li><p>You expect multiple iterations and back-and-forth</p>
</li>
</ul>
<p><strong>Use case:</strong> You want a smart junior/staff engineer thinking with you.</p>
<p><strong>Strategy:</strong><br />Go <strong>straight to your best model</strong> (Claude Sonnet, GPT-5, etc.), often with “thinking” / deeper reasoning turned on.</p>
<p>Don’t bother with cheaper models here. Your own time is more expensive than the token difference.</p>
<hr />
<h3 id="heading-bucket-2-local-routine-mechanical-work">Bucket 2 – Local / routine / mechanical work</h3>
<p>Examples:</p>
<ul>
<li><p>“Add tests for this function and adapt fixtures.”</p>
</li>
<li><p>“Extract this bit of logic into a helper and update this file + its tests.”</p>
</li>
<li><p>“Generate a typed API client from this OpenAPI schema.”</p>
</li>
<li><p>“Add logging around this one endpoint.”</p>
</li>
</ul>
<p>Characteristics:</p>
<ul>
<li><p>Scope is limited to 1–3 files</p>
</li>
<li><p>Expected behavior is clear</p>
</li>
<li><p>You can quickly tell if it’s wrong just by reading the diff</p>
</li>
</ul>
<p><strong>Use case:</strong> You want a smart autocomplete that can handle slightly non-trivial patterns.</p>
<p><strong>Strategy:</strong><br />Use a <strong>cheaper, mid-tier coding model</strong> (Composer, Qwen, Grok-code, etc.) as default.</p>
<ul>
<li><p>If it nails it on the first or second try → great, you saved money.</p>
</li>
<li><p>If it struggles repeatedly for <em>this exact type of task</em>, upgrade that task type to Bucket 1 in your mental model and use a top model next time.</p>
</li>
</ul>
<hr />
<h3 id="heading-bucket-3-bulk-repo-wide-but-well-specified-work">Bucket 3 – Bulk / repo-wide but well-specified work</h3>
<p>Examples:</p>
<ul>
<li><p>Updating license headers everywhere</p>
</li>
<li><p>Renaming a logging library and updating all call sites</p>
</li>
<li><p>Performing a very specific, repetitive change across many files</p>
</li>
</ul>
<p>Characteristics:</p>
<ul>
<li><p>High file count, but low conceptual complexity</p>
</li>
<li><p>You can write a very explicit spec: exactly what changes, where</p>
</li>
</ul>
<p>Here you have two good options:</p>
<ol>
<li><p><strong>Use AI once to generate a codemod</strong>, then run that codemod with normal tooling (no AI for the bulk).</p>
</li>
<li><p><strong>Use a “Max mode” or background agent</strong>, but <em>only</em> when the spec is crystal clear and the change is mechanical.</p>
</li>
</ol>
<p>Background/cloud agents are terrible at vague, creative tasks. They are good at “update this pattern everywhere exactly like this”.</p>
<hr />
<h2 id="heading-5-model-size-thinking-and-max-mode-in-plain-language">5. Model size, “thinking” and Max mode in plain language</h2>
<p>Now let’s map the knobs to this framework.</p>
<h3 id="heading-model-size">Model size</h3>
<ul>
<li><p><strong>Big / top models</strong> (Sonnet, GPT-5, etc.):<br />  Better reasoning, more robust across weird prompts, more expensive.</p>
</li>
<li><p><strong>Mid / small models</strong> (Composer-lite, Qwen, etc.):<br />  Fine for localized code, tests, boilerplate, cheaper.</p>
</li>
</ul>
<p>Mapping:</p>
<ul>
<li><p>Bucket 1 → <strong>Always big</strong></p>
</li>
<li><p>Bucket 2 → <strong>Default to mid</strong>, escalate only if it clearly fails</p>
</li>
<li><p>Bucket 3 → Depends on whether the transformation is semantic or mechanical:</p>
<ul>
<li><p>Mechanical → mid/small or no AI at all</p>
</li>
<li><p>Semantic → big model once to define the pattern, then script/codemod</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-thinking-models">“Thinking” models</h3>
<p>A “thinking” mode basically means:</p>
<ul>
<li><p>The model spends more internal compute to reason before answering</p>
</li>
<li><p>It might run extra internal steps, plan, and then answer</p>
</li>
<li><p>It tends to be slower and more expensive, but handles harder problems</p>
</li>
</ul>
<p>Mapping:</p>
<ul>
<li><p>Bucket 1:</p>
<ul>
<li>“Thinking” <strong>ON</strong> is usually worth it for tough bugs and design questions.</li>
</ul>
</li>
<li><p>Bucket 2:</p>
<ul>
<li>“Thinking” often unnecessary – you want precision and speed.</li>
</ul>
</li>
<li><p>Bucket 3:</p>
<ul>
<li>If change is mechanical, “thinking” is overkill. If change is subtle/semantic, have the big model reason once about the pattern, then let code/scripts do the bulk.</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-max-mode-huge-context">“Max mode” / huge context</h3>
<p>“Max” is essentially:</p>
<ul>
<li><p>Give the model <strong>way more context</strong> (more of your repo + history)</p>
</li>
<li><p>Allow <strong>many more tool calls / steps</strong> per “task”</p>
</li>
<li><p>Accept <strong>higher cost and latency</strong></p>
</li>
</ul>
<p>This is not “make it smarter”; it’s “let it see and touch more at once.”</p>
<p>Mapping:</p>
<ul>
<li><p>Bucket 1:</p>
<ul>
<li>Sometimes useful if understanding a huge codebase, but dangerous if you give it too much autonomy.</li>
</ul>
</li>
<li><p>Bucket 2:</p>
<ul>
<li>Mostly unnecessary. Your scope is intentionally small.</li>
</ul>
</li>
<li><p>Bucket 3:</p>
<ul>
<li>Useful if you’re doing massive transformations and want the AI to navigate many files, as long as the spec is strict.</li>
</ul>
</li>
</ul>
<p>As a default:</p>
<ul>
<li><p><strong>Max OFF</strong> unless:</p>
<ul>
<li><p>The task is explicitly repo-wide or doc-heavy, <strong>and</strong></p>
</li>
<li><p>You already know what you want it to do.</p>
</li>
</ul>
</li>
</ul>
<hr />
<h2 id="heading-6-autonomy-ask-vs-agent-vs-plan-vs-do-everything-in-the-cloud">6. Autonomy: Ask vs Agent vs Plan vs “do everything in the cloud”</h2>
<p>The other dimension is: <em>how much power do you give it?</em></p>
<h3 id="heading-low-autonomy-ask-mode">Low autonomy – “Ask mode”</h3>
<ul>
<li><p>The AI explains, reviews, suggests.</p>
</li>
<li><p><strong>You</strong> apply changes manually.</p>
</li>
</ul>
<p>This is ideal for Bucket 1 (understanding, design, root-cause analysis). You want conversation, not automated edits.</p>
<h3 id="heading-medium-autonomy-agent-mode-on-a-leash">Medium autonomy – “Agent mode on a leash”</h3>
<ul>
<li><p>The AI can edit files, but you keep tasks small.</p>
</li>
<li><p>You review every diff like a PR.</p>
</li>
</ul>
<p>This fits Bucket 2: local changes where the impact is limited and obvious.</p>
<p>You can even tell an Agent: “<strong>Do not edit files yet, only propose a patch</strong>” – turning it into Ask mode with extra repo context.</p>
<h3 id="heading-high-autonomy-cloud-agent-big-plan-lots-of-freedom">High autonomy – “Cloud agent, big plan, lots of freedom”</h3>
<ul>
<li><p>The AI runs in a separate environment, possibly with Max context.</p>
</li>
<li><p>It can edit many files and run tools, often without interactive feedback.</p>
</li>
</ul>
<p>This only belongs in Bucket 3 – <em>mechanical</em> repo-wide work with a tight spec.</p>
<p>Using high-autonomy modes for Bucket 1 or 2 (“figure out this fuzzy feature idea and implement it end-to-end”) is almost guaranteed to produce garbage. You’re skipping the human-in-the-loop phase where requirements actually get nailed down.</p>
<hr />
<h2 id="heading-7-a-practical-playbook-you-can-actually-use">7. A practical playbook you can actually use</h2>
<p>You don’t need to think from scratch every time. Here’s a simple policy you can adopt:</p>
<h3 id="heading-step-1-classify-the-task">Step 1 – Classify the task</h3>
<p>Before typing the prompt, ask:</p>
<blockquote>
<p>“Is this hard/ambiguous, local/mechanical, or bulk/mechanical?”</p>
</blockquote>
<p>Then:</p>
<ul>
<li><p><strong>Hard / ambiguous → Bucket 1</strong></p>
</li>
<li><p><strong>Local / mechanical → Bucket 2</strong></p>
</li>
<li><p><strong>Bulk / mechanical → Bucket 3</strong></p>
</li>
</ul>
<h3 id="heading-step-2-apply-the-corresponding-defaults">Step 2 – Apply the corresponding defaults</h3>
<p><strong>Bucket 1</strong></p>
<ul>
<li><p><strong>Model:</strong> Best available (Sonnet/GPT-5)</p>
</li>
<li><p><strong>Thinking:</strong> ON</p>
</li>
<li><p><strong>Max:</strong> OFF by default, ON only when you really need repo-wide visibility to understand something</p>
</li>
<li><p><strong>Autonomy:</strong> Ask mode; Agent only for small, clearly scoped edits you’ve agreed on</p>
</li>
</ul>
<p><strong>Bucket 2</strong></p>
<ul>
<li><p><strong>Model:</strong> Mid-tier coding model (Composer/Qwen/etc.)</p>
</li>
<li><p><strong>Thinking:</strong> OFF (or minimal)</p>
</li>
<li><p><strong>Max:</strong> OFF</p>
</li>
<li><p><strong>Autonomy:</strong> Agent allowed, but limited to small scopes; you review diffs</p>
</li>
</ul>
<p>If the model fails more than once on this sort of task, reclassify that pattern as Bucket 1.</p>
<p><strong>Bucket 3</strong></p>
<ul>
<li><p><strong>Model:</strong></p>
<ul>
<li><p>Use the top model <strong>once</strong> to help you define a precise spec or codemod script</p>
</li>
<li><p>Use scripts/tools for the actual transformation</p>
</li>
</ul>
</li>
<li><p><strong>Thinking:</strong> ON only during spec design, OFF for mechanical execution</p>
</li>
<li><p><strong>Max:</strong> ON only if you actually need massive context or multi-file reasoning</p>
</li>
<li><p><strong>Autonomy:</strong> Cloud/background agent only for strictly specified, mechanical jobs</p>
</li>
</ul>
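<p>If you like your policies executable, the Step 1 / Step 2 defaults above fit in a small lookup table. This is just a sketch of the playbook as data – the bucket keys and setting values mirror the lists above, and the model names are placeholders for whatever your editor currently offers.</p>

```python
# A minimal sketch of the Step 1 / Step 2 policy as a lookup table.
# Bucket names and setting values mirror the defaults above; the model
# names are placeholders, not real product identifiers.
DEFAULTS = {
    "hard_ambiguous": {        # Bucket 1
        "model": "best-available",
        "thinking": True,
        "max_context": False,  # ON only for repo-wide understanding
        "autonomy": "ask",
    },
    "local_mechanical": {      # Bucket 2
        "model": "mid-tier",
        "thinking": False,
        "max_context": False,
        "autonomy": "agent-small-scope",
    },
    "bulk_mechanical": {       # Bucket 3
        "model": "top-for-spec-then-scripts",
        "thinking": "spec-design-only",
        "max_context": "only-if-needed",
        "autonomy": "cloud-agent-strict-spec",
    },
}

def settings_for(task_kind: str) -> dict:
    """Classify first (Step 1), then apply the matching defaults (Step 2)."""
    return DEFAULTS[task_kind]
```

<p>The point isn’t to automate the choice – it’s that once the policy is this explicit, you stop renegotiating it with yourself on every prompt.</p>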
<hr />
<h2 id="heading-8-optimize-for-your-time-not-just-token-spend">8. Optimize for your time, not just token spend</h2>
<p>The final mental shift is this:</p>
<ul>
<li><p>Tokens are cheap compared to your time and frustration.</p>
</li>
<li><p>Chasing small token savings by trying several cheap models first and escalating only when they fail usually costs more in time than it saves in tokens.</p>
</li>
</ul>
<p>For interactive development:</p>
<ul>
<li><p>Use the <strong>best model</strong> for anything that feels like “real engineering” (understanding, design, complex bugs).</p>
</li>
<li><p>Use <strong>cheaper models or no AI at all</strong> for tiny, obvious, mechanical things.</p>
</li>
<li><p>Use AI + scripts for big mechanical tasks, not AI alone.</p>
</li>
</ul>
<p>That’s how you actually get <strong>quality per buck</strong> – and, more importantly, quality per hour of your life.</p>
]]></content:encoded></item><item><title><![CDATA[AI Can Build Products, Not Businesses (Yet)]]></title><description><![CDATA[If AI can write code, design UI, generate content, run outreach, and analyze metrics, what exactly is left for founders and developers to do?
On paper, it looks like we are only a few prompts away from “AI‑built businesses” that launch, grow, and pri...]]></description><link>https://blog.samiralibabic.com/ai-can-build-products-not-businesses-yet</link><guid isPermaLink="true">https://blog.samiralibabic.com/ai-can-build-products-not-businesses-yet</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[AI]]></category><category><![CDATA[business]]></category><category><![CDATA[Products]]></category><category><![CDATA[software development]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Tue, 02 Dec 2025 08:04:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/U3sOwViXhkY/upload/c5a8ecf8ce6465fddb27b60b8419fc19.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If AI can write code, design UI, generate content, run outreach, and analyze metrics, what exactly is left for founders and developers to do?</p>
<p>On paper, it looks like we are only a few prompts away from “AI‑built businesses” that launch, grow, and print money while humans watch from the sidelines.</p>
<p>In reality, that’s not happening. Not because AI is weak, but because we are confusing <strong>building products</strong> with <strong>building businesses</strong>.</p>
<p>This article is an attempt to draw that line very clearly.</p>
<hr />
<h2 id="heading-what-ai-is-already-good-at">What AI is already good at</h2>
<p>Let’s start from the uncomfortable truth:</p>
<p>For a lot of individual tasks, AI is already as good as or better than the average developer, marketer, or solo founder.</p>
<p>Today’s models can already:</p>
<ul>
<li><p>Generate working code for full‑stack apps, including APIs, auth, and CRUD.</p>
</li>
<li><p>Fix bugs and refactor code bases.</p>
</li>
<li><p>Write documentation, tests, and basic system design.</p>
</li>
<li><p>Do market and competitor research from public data.</p>
</li>
<li><p>Write landing pages, onboarding flows, pricing pages, and onboarding emails.</p>
</li>
<li><p>Generate long‑form SEO articles, social media content, and ad copy.</p>
</li>
<li><p>Draft outbound outreach emails, follow‑ups, and support replies.</p>
</li>
</ul>
<p>If you give an AI system tool access (browser, APIs, Git, deployment, email), it can realistically:</p>
<ul>
<li><p>Pick a niche.</p>
</li>
<li><p>Build and deploy an MVP.</p>
</li>
<li><p>Fill it with content.</p>
</li>
<li><p>Set up analytics.</p>
</li>
<li><p>Start basic distribution.</p>
</li>
</ul>
<p>So the reassuring line – “don’t worry, AI can’t do X” – is getting thinner every month. The point is not that humans are better at writing code or copy. Often, they aren’t.</p>
<p>But that’s not where the real bottleneck is.</p>
<hr />
<h2 id="heading-what-a-business-actually-is">What a business actually is</h2>
<p>A lot of confusion comes from treating “I launched a thing” as equivalent to “I built a business.”</p>
<p>They are not the same.</p>
<p>A <strong>product</strong> is something you can build and ship.</p>
<p>A <strong>business</strong> is a system that:</p>
<ul>
<li><p>Solves a real problem for real people,</p>
</li>
<li><p>Reaches those people repeatedly,</p>
</li>
<li><p>Gets paid reliably,</p>
</li>
<li><p>And does all of that profitably and sustainably over time.</p>
</li>
</ul>
<p>That system includes:</p>
<ul>
<li><p>Customers and their trust.</p>
</li>
<li><p>Distribution channels.</p>
</li>
<li><p>Pricing and positioning.</p>
</li>
<li><p>Support and operations.</p>
</li>
<li><p>Brand and reputation.</p>
</li>
<li><p>Feedback loops and iteration.</p>
</li>
</ul>
<p>That is where things get slow and messy – and where AI runs into its real limits.</p>
<hr />
<h2 id="heading-the-latency-wall-execution-is-fast-trust-is-slow">The latency wall: execution is fast, trust is slow</h2>
<p>Even if an AI system can build and launch a product in a weekend, it does not magically skip the <strong>latency of external systems</strong>.</p>
<p>Some examples:</p>
<h3 id="heading-1-seo-takes-time">1. SEO takes time</h3>
<p>You can generate and publish 100 high‑quality articles in a few days. That does not mean you will get traffic in a few days.</p>
<ul>
<li><p>Search engines need to discover, crawl, and index the site.</p>
</li>
<li><p>The domain needs to build history and trust.</p>
</li>
<li><p>Backlinks need to appear and be evaluated.</p>
</li>
<li><p>User behavior signals have to accumulate.</p>
</li>
</ul>
<p>That process plays out over <strong>weeks and months</strong>, not hours.</p>
<h3 id="heading-2-social-and-word-of-mouth-are-slow-by-default">2. Social and word of mouth are slow by default</h3>
<p>You can have AI generate perfect Reddit posts, tweets, LinkedIn threads, and community replies.</p>
<p>Most of them will still get ignored.</p>
<p>Attention is a crowded, noisy market. Without an existing audience, relationships, or reputation, your posts are just more noise. Occasionally something catches and spreads. Usually it does not.</p>
<h3 id="heading-3-feedback-loops-cannot-be-rushed">3. Feedback loops cannot be rushed</h3>
<p>You only learn if your product works when enough real users:</p>
<ul>
<li><p>Find it,</p>
</li>
<li><p>Try it,</p>
</li>
<li><p>Use it for long enough,</p>
</li>
<li><p>And either stick around or churn.</p>
</li>
</ul>
<p>That learning loop looks like:</p>
<blockquote>
<p>Launch → tiny trickle of users → vague feedback → small changes → wait → observe behavior → repeat.</p>
</blockquote>
<p>It is inherently slow in the beginning because you don’t have volume. You cannot A/B test your way to clarity with 23 visitors a day.</p>
<h3 id="heading-4-trust-compounds-on-a-different-timescale">4. Trust compounds on a different timescale</h3>
<p>People don’t trust a random new domain or anonymous brand instantly.</p>
<p>Trust builds from:</p>
<ul>
<li><p>Showing up consistently.</p>
</li>
<li><p>Being around for a while.</p>
</li>
<li><p>Fixing issues instead of disappearing.</p>
</li>
<li><p>Answering emails.</p>
</li>
<li><p>Word of mouth.</p>
</li>
</ul>
<p>You can automate responses. You cannot automate <strong>“this thing has been here and reliable for a long time”</strong>. That’s just time.</p>
<hr />
<h2 id="heading-why-ai-cant-just-agent-its-way-out-of-this">Why AI can’t just “agent” its way out of this</h2>
<p>Modern AI agents can:</p>
<ul>
<li><p>Run in loops,</p>
</li>
<li><p>Break goals into tasks,</p>
</li>
<li><p>Call tools and APIs,</p>
</li>
<li><p>Read and write to external memory (via RAG or databases),</p>
</li>
<li><p>Observe outputs and try something else.</p>
</li>
</ul>
<p>On paper, that looks a lot like “agency.” In practice, several hard limits remain.</p>
<h3 id="heading-1-no-real-skin-in-the-game">1. No real skin in the game</h3>
<p>An AI system does not care if the business works.</p>
<p>It doesn’t feel the difference between:</p>
<ul>
<li><p>Zero revenue and growing revenue.</p>
</li>
<li><p>Burning runway and being profitable.</p>
</li>
<li><p>Keeping a promise to a customer and breaking it.</p>
</li>
</ul>
<p>It can be programmed to optimize metrics, but it does not have anything at stake. There is no real <strong>fear</strong>, <strong>desire</strong>, or <strong>responsibility</strong> behind its decisions.</p>
<h3 id="heading-2-no-real-judgment-under-uncertainty">2. No real judgment under uncertainty</h3>
<p>Business decisions are rarely clean optimization problems. You constantly deal with incomplete, conflicting, and noisy signals:</p>
<ul>
<li><p>Some users say they love the product, others churn silently.</p>
</li>
<li><p>Traffic goes up but revenue doesn’t.</p>
</li>
<li><p>A new feature might help or might just add complexity.</p>
</li>
</ul>
<p>Humans develop intuition over time for which signals to trust, which to ignore, and when to make a call with limited data. AI can simulate confidence, but underneath, it is still just pattern matching.</p>
<h3 id="heading-3-no-ownership-of-the-long-game">3. No ownership of the long game</h3>
<p>A real business requires:</p>
<ul>
<li><p>Signing contracts and handling liability.</p>
</li>
<li><p>Dealing with regulation and taxes.</p>
</li>
<li><p>Navigating platforms changing rules.</p>
</li>
<li><p>Choosing when to pivot, when to hold, and when to quit.</p>
</li>
</ul>
<p>All of that ultimately sits on a human or a legal entity that can be held accountable. AI can draft emails and strategies, but it doesn’t own the consequences.</p>
<hr />
<h2 id="heading-the-real-bottleneck-resilience-and-compounding">The real bottleneck: resilience and compounding</h2>
<p>If you strip away all the hype, the game looks like this:</p>
<ul>
<li><p>Building products is getting <strong>cheaper and faster</strong>.</p>
</li>
<li><p>Distribution and trust still take <strong>time and consistency</strong>.</p>
</li>
<li><p>Most people quit somewhere in that gap.</p>
</li>
</ul>
<p>This is why so many “good ideas” and “nice products” never become real businesses. Not because they were impossible. Because the builders ran out of:</p>
<ul>
<li><p>Energy,</p>
</li>
<li><p>Money,</p>
</li>
<li><p>Patience,</p>
</li>
<li><p>Or belief,</p>
</li>
</ul>
<p>before the trust and audience had time to compound.</p>
<p>You could describe the situation like this:</p>
<blockquote>
<p>You either need a resilient human, willing to grind through months of slow, ambiguous progress.</p>
<p>Or you need a machine with enough credits to run an endless loop of generations and iterations until the numbers finally move.</p>
</blockquote>
<p>Right now, only humans actually <em>care</em> about the outcome. AI will keep generating as long as you pay the bill. It will not feel the frustration of a flat analytics graph.</p>
<hr />
<h2 id="heading-where-ai-actually-changes-the-game">Where AI actually changes the game</h2>
<p>All of this does not mean “AI is overrated” or “nothing changes.” A lot changes.</p>
<p>AI dramatically reduces the cost of:</p>
<ul>
<li><p>Testing new ideas.</p>
</li>
<li><p>Rebuilding or repositioning products.</p>
</li>
<li><p>Producing content and educational material.</p>
</li>
<li><p>Responding to user feedback.</p>
</li>
<li><p>Running experiments in parallel.</p>
</li>
</ul>
<p>In other words, it makes it <strong>much more survivable</strong> to stay in the game long enough for trust and distribution to catch up.</p>
<p>You still have to endure the grind. But your iterations can be faster, cheaper, and less painful.</p>
<p>Instead of spending three months building the wrong thing, realizing it, and starting over, you might spend a weekend. That does not remove the need for resilience, but it changes the economics of trial and error.</p>
<hr />
<h2 id="heading-so-whats-left-for-founders-and-developers">So what’s left for founders and developers?</h2>
<p>If AI keeps getting better at execution, what is left for humans?</p>
<p>At least for now:</p>
<ul>
<li><p><strong>Choosing the game</strong> – what problem to work on, in which market, with which constraints.</p>
</li>
<li><p><strong>Understanding people</strong> – not as personas in a slide deck, but as messy humans with fears, desires, and habits.</p>
</li>
<li><p><strong>Building trust</strong> – showing up, keeping promises, dealing fairly with customers and partners.</p>
</li>
<li><p><strong>Holding the line</strong> – staying in long enough for the compounding to work instead of hopping to the next shiny thing.</p>
</li>
</ul>
<p>AI can be an amplifier for all of that. It can remove a lot of the busywork and technical friction. But it does not remove the need for someone to care, commit, and endure.</p>
<hr />
<h2 id="heading-closing-thoughts">Closing thoughts</h2>
<p>AI can already build products that look impressive. It can scaffold full apps, populate them with content, and even start basic marketing.</p>
<p>What it cannot do is skip the part where the world slowly decides whether any of this deserves attention, trust, and money.</p>
<p>That part still takes months or years. It still requires resilience. And it still separates people who ship one clever project from people who end up owning real businesses.</p>
<p>AI is not a replacement for that. It is leverage for the few who are willing to keep going while everyone else gives up.</p>
]]></content:encoded></item><item><title><![CDATA[The Blog Tax: Why Search Engines Punish Useful Products]]></title><description><![CDATA[We’ve quietly accepted a strange reality on the web: if you build a genuinely useful product – a SaaS app, a tool, a directory, a marketplace – search engines will often ignore you unless you also bolt on a content machine.
Not because your tool isn’...]]></description><link>https://blog.samiralibabic.com/the-blog-tax-why-search-engines-punish-useful-products</link><guid isPermaLink="true">https://blog.samiralibabic.com/the-blog-tax-why-search-engines-punish-useful-products</guid><category><![CDATA[SEO]]></category><category><![CDATA[SEO for Developers]]></category><category><![CDATA[Blogging]]></category><category><![CDATA[content creation]]></category><category><![CDATA[Search engine optimization]]></category><category><![CDATA[Digital Marketing ]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Mon, 01 Dec 2025 09:30:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/O5v8heKY4cI/upload/00c93cd94f671247c1afba918c4bcc5e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We’ve quietly accepted a strange reality on the web: if you build a genuinely useful product – a SaaS app, a tool, a directory, a marketplace – search engines will often ignore you unless you also bolt on a content machine.</p>
<p>Not because your tool isn’t valuable.</p>
<p>But because it doesn’t talk enough.</p>
<p>Modern search is still, in practice, a text-first system. If you don’t publish long, optimised articles that feed this system, you’re playing with a handicap.</p>
<p>This isn’t just annoying for founders. It’s a structural flaw in how we discover things online.</p>
<hr />
<h2 id="heading-how-seo-turned-blogs-into-a-toll-booth">How SEO turned blogs into a toll booth</h2>
<p>Ask almost any SEO how to get organic traffic for a SaaS, marketplace, directory, or e‑commerce brand and the advice is painfully consistent:</p>
<blockquote>
<p>“You need a blog.”</p>
</blockquote>
<p>The logic behind that advice is simple:</p>
<ul>
<li><p>Search engines crawl and index <strong>text</strong>.</p>
</li>
<li><p>They need a lot of text to infer <strong>what your site is about</strong>, what queries it should rank for, and how authoritative it is.</p>
</li>
<li><p>Product pages, dashboards, and tools are often <strong>light on copy</strong> and heavy on interactions, data, or UI.</p>
</li>
</ul>
<p>So if your core value is:</p>
<ul>
<li><p>an uptime monitoring tool</p>
</li>
<li><p>an SEO reporting dashboard</p>
</li>
<li><p>a directory of niche suppliers</p>
</li>
<li><p>a marketplace with smart matching</p>
</li>
</ul>
<p>…you’re still told to create:</p>
<ul>
<li><p>“Top 10 tools for X in 2026” posts</p>
</li>
<li><p>“How to do X” guides</p>
</li>
<li><p>“Complete beginner’s guide to Y” articles</p>
</li>
</ul>
<p>Not because this is always the most useful thing you could do for your users.</p>
<p>But because this is what search engines understand best.</p>
<h3 id="heading-the-misalignment">The misalignment</h3>
<p>On paper, a search engine’s job is to:</p>
<blockquote>
<p>Find the most relevant and useful resource for a given query.</p>
</blockquote>
<p>In practice, what often ranks is:</p>
<ul>
<li><p>the <strong>best‑optimised article</strong> about a tool, not the tool itself;</p>
</li>
<li><p>the website with <strong>more long‑form content</strong>, not necessarily the better product;</p>
</li>
<li><p>whoever has invested more into content and SEO, not whoever actually solves the problem best.</p>
</li>
</ul>
<p>That gap between <em>“best resource”</em> and <em>“best SEO asset”</em> is the blog tax. If you build something valuable but light on text, you either pay that tax – or stay invisible.</p>
<hr />
<h2 id="heading-why-tools-and-systems-lose-to-text">Why tools and systems lose to text</h2>
<p>Think about the kinds of products that get under‑rewarded in this system:</p>
<ul>
<li><p>A web app that does one thing extremely well with a clean, minimal UI.</p>
</li>
<li><p>A directory whose value is in the <strong>data</strong> and filtering, not the prose around it.</p>
</li>
<li><p>A piece of infrastructure or automation that integrates into someone’s workflow and “just works.”</p>
</li>
</ul>
<p>From a user’s perspective, these are ideal. They are:</p>
<ul>
<li><p>focused</p>
</li>
<li><p>fast</p>
</li>
<li><p>low on noise</p>
</li>
</ul>
<p>From a search engine’s perspective, they are often:</p>
<ul>
<li><p>thin on crawlable content</p>
</li>
<li><p>hard to classify</p>
</li>
<li><p>weak on traditional on‑page signals</p>
</li>
</ul>
<p>Crawlers don’t “experience” the product the way a human user does. They mostly see:</p>
<ul>
<li><p>how much text is on the page</p>
</li>
<li><p>how it’s structured (headings, paragraphs, lists)</p>
</li>
<li><p>what other sites link to it</p>
</li>
<li><p>what metadata you provide</p>
</li>
</ul>
<p>If your standout feature is an interactive dashboard, a smart recommendation engine, or a slick workflow, none of that is easily expressible in plain HTML text without you <strong>writing about it</strong>.</p>
<p>So you end up doing exactly that: writing about your product in longform, not because users need more words, but because search engines do.</p>
<hr />
<h2 id="heading-this-pattern-isnt-new-meta-keywords-deja-vu">This pattern isn’t new: meta keywords déjà vu</h2>
<p>We’ve seen this play out before.</p>
<p>In the early days of SEO, <strong>meta keywords</strong> were a thing. You told search engines what your page was about using a simple tag. It didn’t take long for people to:</p>
<ul>
<li><p>stuff every possible keyword in there</p>
</li>
<li><p>add irrelevant keywords to siphon traffic</p>
</li>
<li><p>use it as a cheat‑code rather than a description</p>
</li>
</ul>
<p>The result was predictable:</p>
<ul>
<li><p>the signal became noisy and unreliable</p>
</li>
<li><p>search engines started discounting it</p>
</li>
<li><p>eventually, major engines simply <strong>ignored meta keywords</strong> altogether</p>
</li>
</ul>
<p>Any explicit, easy‑to‑manipulate ranking signal follows a similar arc:</p>
<ol>
<li><p>It’s introduced with good intentions.</p>
</li>
<li><p>It’s discovered by SEOs.</p>
</li>
<li><p>It’s exploited and overused.</p>
</li>
<li><p>It’s discounted or abandoned.</p>
</li>
</ol>
<p>Structured data and schema markup already show the early symptoms of this cycle. They are extremely useful in theory – a machine‑readable way to describe what’s on a page – but they’re also being:</p>
<ul>
<li><p>used to inflate review stars</p>
</li>
<li><p>applied in contexts where they don’t really fit</p>
</li>
<li><p>turned into yet another surface for keyword games</p>
</li>
</ul>
<p>The underlying problem isn’t any single feature. It’s the incentive structure: as long as <strong>clear, mechanical levers</strong> exist, people will pull them as hard as possible.</p>
<hr />
<h2 id="heading-the-perverse-incentive-write-more-not-build-better">The perverse incentive: write more, not build better</h2>
<p>This dynamic leads to a strange economy of effort:</p>
<ul>
<li><p>You can ship a brilliant tool that saves users hours every week.</p>
</li>
<li><p>But if you don’t also produce thousands of words of “strategic” content, search may never send anyone to see it.</p>
</li>
</ul>
<p>So founders and teams do what the system rewards:</p>
<ul>
<li><p>spin up content calendars</p>
</li>
<li><p>produce “SEO articles” that rehash the same advice everyone else has</p>
</li>
<li><p>write posts that exist primarily so some keyword has a place to live</p>
</li>
</ul>
<p>Meanwhile, users searching for answers often end up:</p>
<ul>
<li><p>wading through generic content</p>
</li>
<li><p>being told about tools instead of directly finding and using them</p>
</li>
<li><p>losing time on pages that are optimised for bots, not humans</p>
</li>
</ul>
<p>The search engine technically did its job – it found a page that “matches” the query. But from a human perspective, the result feels… off. You asked for <strong>a solution</strong> and got <strong>an article about solutions</strong>.</p>
<hr />
<h2 id="heading-what-a-better-search-system-would-look-like">What a better search system would look like</h2>
<p>If we started from the user’s perspective instead of the crawler’s constraints, search would behave differently.</p>
<p>A better system would:</p>
<ul>
<li><p>Rank <strong>tools by usefulness</strong>, not just the blogs that mention them.</p>
</li>
<li><p>Understand that a minimal product page can still represent the best possible answer to a query.</p>
</li>
<li><p>Use more than just text length and keyword usage as proxies for quality.</p>
</li>
</ul>
<p>Concretely, that could mean leaning more on:</p>
<ul>
<li><p><strong>User behaviour</strong>: Do people who land on this tool actually stay, use it, and return?</p>
</li>
<li><p><strong>Direct signals of utility</strong>: For web apps, things like repeat usage, task completion, or satisfaction (where measurable).</p>
</li>
<li><p><strong>Richer understanding of the page</strong>: Using modern AI models to interpret layout, UI, and intent – not just count words and headings.</p>
</li>
</ul>
<p>At that point, a small, focused uptime monitoring tool with a tight, honest landing page could outrank yet another “Top 37 uptime monitoring tools you need in 2026” article.</p>
<p>We’re seeing early hints of this direction with:</p>
<ul>
<li><p>AI answers layered on top of web results</p>
</li>
<li><p>more emphasis on “helpful content” and less tolerance for obvious fluff</p>
</li>
<li><p>experiments in surfacing apps, tools, and answers more directly</p>
</li>
</ul>
<p>But for now, the old incentives still dominate.</p>
<hr />
<h2 id="heading-so-what-should-founders-do-today">So what should founders do today?</h2>
<p>If you’re building a SaaS, a directory, a marketplace, or a specialised tool, you’re stuck in a hybrid reality:</p>
<ul>
<li><p>The system is flawed.</p>
</li>
<li><p>But you still need to operate within it.</p>
</li>
</ul>
<p>A few pragmatic guidelines:</p>
<ol>
<li><p><strong>Accept that some text is necessary.</strong><br /> You don’t need to turn your product into a content farm, but your site needs enough descriptive, structured text that a search engine can understand what you do and who you help.</p>
</li>
<li><p><strong>Make content genuinely useful.</strong><br /> If you’re going to write, write things that:</p>
<ul>
<li><p>genuinely help your ideal users</p>
</li>
<li><p>explain your approach, trade‑offs, and constraints</p>
</li>
<li><p>show real examples, case studies, or data</p>
</li>
</ul>
</li>
</ol>
<p>    Use content to bridge the gap between your product and the problems it solves, not just to chase keywords.</p>
<ol start="3">
<li><p><strong>Lean on other discovery channels.</strong><br /> Search isn’t the only way to be found. For many tools, it may not even be the best channel at the start. Consider:</p>
<ul>
<li><p>niche communities (forums, subreddits, Discords)</p>
</li>
<li><p>integrations and partnerships</p>
</li>
<li><p>directories, comparison sites, and marketplaces</p>
</li>
<li><p>targeted outreach or small pilot programs</p>
</li>
</ul>
</li>
</ol>
<p>    These can get real users in front of your product long before Google “decides” you exist.</p>
<ol start="4">
<li><p><strong>Design your site for humans first, crawlers second.</strong><br /> It’s still worth doing the basics right – titles, headings, internal links, structured data where appropriate – but not at the expense of clarity and usability for real people.</p>
</li>
<li><p><strong>Keep an eye on how search evolves.</strong><br /> As AI‑driven search and answer engines mature, the bias toward walls of text will probably weaken. If and when that happens, lean products that are already great for users will be in a stronger position than content‑bloated ones.</p>
</li>
</ol>
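<p>To make “structured data where appropriate” from point 4 concrete: a lean product page can describe itself to crawlers with a few lines of schema.org markup, no blog required. This is a minimal sketch using the real <code>SoftwareApplication</code> type; the product name, description, and price are invented for illustration.</p>

```python
import json

# Minimal schema.org SoftwareApplication markup for a hypothetical tool's
# landing page. The product details are invented for illustration; the
# @type and property names are real schema.org vocabulary.
markup = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "Example Uptime Monitor",
    "applicationCategory": "DeveloperApplication",
    "description": "Monitors website uptime and alerts you when it drops.",
    "offers": {
        "@type": "Offer",
        "price": "9.00",
        "priceCurrency": "USD",
    },
}

# Embed the result in a <script type="application/ld+json"> tag in <head>.
json_ld = json.dumps(markup, indent=2)
print(json_ld)
```

<p>It won’t beat a hundred blog posts on its own, but it gives a text-light page at least a machine-readable description of what it is and what it costs.</p>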
<hr />
<h2 id="heading-the-blog-tax-is-real-but-its-not-the-endgame">The blog tax is real, but it’s not the endgame</h2>
<p>It’s frustrating – and a bit absurd – that in 2025 the easiest way to make a product discoverable by search is still to wrap it in longform text.</p>
<p>Many of the most valuable resources on the web are:</p>
<ul>
<li><p>small, sharp tools</p>
</li>
<li><p>carefully curated directories</p>
</li>
<li><p>systems that automate something boring and painful</p>
</li>
</ul>
<p>Those things should be first‑class citizens in search, not second‑class ones forced to drag a blog behind them just to be seen.</p>
<p>Until search engines catch up, most of us will keep paying the blog tax in one form or another. But it’s worth remembering that this is a limitation of the current system, not a law of nature.</p>
<p>The end goal shouldn’t be “who wrote the longest article about X.”<br />It should be: <strong>who actually solves the problem best.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Which Investments Give the Most Return for the Least Risk?]]></title><description><![CDATA[Most people chase the highest return. Very few ask the more important question:

“How much return am I getting for each unit of risk I take?”

This is the basic idea behind things like the Sharpe ratio. But you don’t need formulas to use the concept....]]></description><link>https://blog.samiralibabic.com/which-investments-give-the-most-return-for-the-least-risk</link><guid isPermaLink="true">https://blog.samiralibabic.com/which-investments-give-the-most-return-for-the-least-risk</guid><category><![CDATA[finance]]></category><category><![CDATA[Investing]]></category><category><![CDATA[ETF]]></category><category><![CDATA[stocks]]></category><category><![CDATA[Wealth]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Sat, 29 Nov 2025 23:00:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/zzQdGf1cv18/upload/3e067752a78285c8944a00d7835f7c9b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most people chase the highest return. Very few ask the more important question:</p>
<blockquote>
<p>“How much return am I getting for each unit of risk I take?”</p>
</blockquote>
<p>This is the basic idea behind things like the Sharpe ratio. But you don’t need formulas to use the concept. You only need to compare <strong>average long‑term returns</strong> with <strong>how wildly those returns move around</strong>.</p>
<p>Below is a simple way to do that for normal, accessible investments.</p>
<hr />
<h2 id="heading-1-the-idea-return-per-unit-of-risk">1. The idea: return per unit of risk</h2>
<p>Two facts:</p>
<ul>
<li><p>A government bond at 3% with almost no drama can be better than a crypto coin at 30% with constant crashes.</p>
</li>
<li><p>A slightly lower return can be <strong>objectively better</strong> if it comes with much less risk.</p>
</li>
</ul>
<p>To make that concrete, we’ll look at:</p>
<ul>
<li><p><strong>Average annual return</strong> – what the asset class has roughly delivered over long periods.</p>
</li>
<li><p><strong>Risk</strong> – how volatile it is (how much it swings up and down).</p>
</li>
<li><p><strong>Return ÷ risk</strong> – a simple “bang for risk” score.</p>
</li>
</ul>
<p>This is just a simplified version of the Sharpe ratio without the math. The goal is not precision, but <strong>ranking</strong>: which assets historically gave more reward per unit of pain.</p>
<p>Numbers below are rounded ballpark figures from long‑term historical data (mainly US, because that’s where the best data exists). They’re not forecasts.</p>
<hr />
<h2 id="heading-2-accessible-asset-classes-compared">2. Accessible asset classes compared</h2>
<p>We’ll only look at things a normal private investor can actually buy:</p>
<ul>
<li><p>Government bonds</p>
</li>
<li><p>Corporate bonds (via ETFs)</p>
</li>
<li><p>Broad stock market (via ETFs)</p>
</li>
<li><p>Real estate (via REITs)</p>
</li>
<li><p>Individual stocks</p>
</li>
<li><p>Commodities</p>
</li>
<li><p>Cryptocurrencies</p>
</li>
</ul>
<p>Here is the simplified table:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Asset class</th><th>Avg annual return</th><th>Risk (volatility)</th><th>Return ÷ Risk</th></tr>
</thead>
<tbody>
<tr>
<td>Corporate bonds (ETF)</td><td>4.5%</td><td>6.5%</td><td><strong>0.69</strong></td></tr>
<tr>
<td>Broad equity ETF</td><td>8%</td><td>13.5%</td><td><strong>0.59</strong></td></tr>
<tr>
<td>Government bonds</td><td>2.5%</td><td>4.5%</td><td>0.56</td></tr>
<tr>
<td>REITs (listed real estate)</td><td>9%</td><td>16%</td><td>0.56</td></tr>
<tr>
<td>Individual stocks</td><td>9%</td><td>17.5%</td><td>0.51</td></tr>
<tr>
<td>Cryptocurrencies</td><td>30%</td><td>90%</td><td>0.33</td></tr>
<tr>
<td>Commodities</td><td>5%</td><td>22.5%</td><td>0.22</td></tr>
</tbody>
</table>
</div><p>Again: this is <strong>one</strong> simplified view over long periods, not a guarantee about the future. But some patterns are very robust.</p>
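<p>If you want to sanity-check the ranking yourself, the “bang for risk” score is one division per asset. A minimal Python sketch, using the rounded figures from the table:</p>
<pre><code class="lang-python"># Rounded ballpark figures from the table: (avg annual return %, volatility %)
assets = {
    "Corporate bonds (ETF)": (4.5, 6.5),
    "Broad equity ETF": (8.0, 13.5),
    "Government bonds": (2.5, 4.5),
    "REITs (listed real estate)": (9.0, 16.0),
    "Individual stocks": (9.0, 17.5),
    "Cryptocurrencies": (30.0, 90.0),
    "Commodities": (5.0, 22.5),
}

# "Bang for risk" = return divided by volatility, i.e. a Sharpe ratio without
# the risk-free rate. Government bonds and REITs happen to tie at 0.56.
ranked = sorted(assets.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (ret, vol) in ranked:
    print(f"{name}: {ret / vol:.2f}")
</code></pre>
<p>Swap in your own return and volatility estimates and the ranking logic stays the same.</p>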
<hr />
<h2 id="heading-3-what-the-table-actually-tells-you">3. What the table actually tells you</h2>
<h3 id="heading-31-corporate-bond-etfs-the-quiet-workhorse">3.1 Corporate bond ETFs: the quiet workhorse</h3>
<p>Corporate bond ETFs sit in an interesting place:</p>
<ul>
<li><p>Return: higher than government bonds.</p>
</li>
<li><p>Risk: much lower than stocks.</p>
</li>
<li><p>Result: the <strong>highest return per unit of risk</strong> in our comparison.</p>
</li>
</ul>
<p>You’re lending money to many companies at once, not betting on a single stock. The ETF wrapper gives you diversification across issuers, sectors and maturities.</p>
<p>For a private investor, that often translates into:</p>
<ul>
<li><p>More yield than cash or government bonds.</p>
</li>
<li><p>Less gut‑wrenching volatility than stocks.</p>
</li>
</ul>
<p>If you only took one idea from this article, it could be: <strong>corporate bond ETFs are an underrated middle ground between safety and growth.</strong></p>
<hr />
<h3 id="heading-32-broad-equity-etfs-the-longterm-engine">3.2 Broad equity ETFs: the long‑term engine</h3>
<p>Broad equity ETFs (for example, funds tracking large diversified indices) are the classic long‑term growth tool.</p>
<ul>
<li><p>Historically strong returns (around 8% a year, depending on the market, the period, and whether you adjust for inflation).</p>
</li>
<li><p>More volatile than bonds, but you own pieces of productive businesses.</p>
</li>
<li><p>On a return‑per‑risk basis, they score well, just behind corporate bonds in our simplified table.</p>
</li>
</ul>
<p>For most long‑term investors, broad equity ETFs are the <strong>growth engine</strong> of the portfolio.</p>
<hr />
<h3 id="heading-33-government-bonds-and-reits-stability-and-income">3.3 Government bonds and REITs: stability and income</h3>
<p>Government bonds:</p>
<ul>
<li><p>Lower returns but also lower volatility.</p>
</li>
<li><p>Their role is not to make you rich, but to <strong>buffer shocks</strong> and provide a modest yield.</p>
</li>
</ul>
<p>REITs (listed real estate):</p>
<ul>
<li><p>Equity‑like volatility, property‑linked income.</p>
</li>
<li><p>Slightly better returns historically than broad stocks in some periods, but also sensitive to interest rates.</p>
</li>
</ul>
<p>Both sit in the middle of the pack on a return‑per‑risk basis. They are useful <strong>ingredients</strong>, not magic bullets.</p>
<hr />
<h3 id="heading-34-individual-stocks-concentrated-bets">3.4 Individual stocks: concentrated bets</h3>
<p>Individual stocks have:</p>
<ul>
<li><p>Similar average returns to the broad market.</p>
</li>
<li><p>Higher single‑name risk (you can go to zero).</p>
</li>
</ul>
<p>Your return‑per‑risk ratio can be great <strong>if</strong> you pick well. But that “if” is non‑trivial. For most people, concentrated stock picking <em>reduces</em> the overall return‑per‑risk of their portfolio compared to just holding a global ETF.</p>
<hr />
<h3 id="heading-35-commodities-and-crypto-lots-of-drama-little-efficiency">3.5 Commodities and crypto: lots of drama, little efficiency</h3>
<p>Commodities:</p>
<ul>
<li><p>Often volatile.</p>
</li>
<li><p>Long‑term returns aren’t that impressive once you adjust for inflation and roll costs.</p>
</li>
<li><p>In our table, they have the worst return‑per‑risk ratio.</p>
</li>
</ul>
<p>Cryptocurrencies:</p>
<ul>
<li><p>Enormous upside in some years.</p>
</li>
<li><p>Enormous drawdowns in others.</p>
</li>
<li><p>Even if you assume very high average returns, the volatility is so extreme that the return‑per‑risk ratio stays mediocre.</p>
</li>
</ul>
<p>These can have a place as <strong>small, speculative satellites</strong>, not as the core where you want efficient risk‑adjusted returns.</p>
<hr />
<h2 id="heading-4-a-simple-way-to-use-this-in-your-portfolio">4. A simple way to use this in your portfolio</h2>
<p>Once you think in terms of “return per unit of risk”, your portfolio design gets simpler:</p>
<ol>
<li><p><strong>Pick your core growth engine</strong><br /> For most people, that’s a <strong>broad, low‑cost equity ETF</strong>.</p>
</li>
<li><p><strong>Add a stabilizer with a strong return‑per‑risk profile</strong><br /> Corporate bond ETFs are a natural candidate here. They:</p>
<ul>
<li><p>Improve your overall ratio.</p>
</li>
<li><p>Provide income.</p>
</li>
<li><p>Smooth the ride when stocks are volatile.</p>
</li>
</ul>
</li>
<li><p><strong>Use government bonds or cash for shock absorbers</strong><br /> If you need short‑term liquidity or hate deep drawdowns, you can add some government bonds or a money market fund.</p>
</li>
<li><p><strong>Keep speculative stuff small</strong><br /> Crypto, single commodities, or very concentrated stock bets can stay in a <strong>small “fun” bucket</strong>. Assume that money can go to zero without ruining your plan.</p>
</li>
</ol>
<p>You don’t need to compute exact Sharpe ratios or optimize every decimal. The point is to avoid obviously inefficient choices: lots of drama for not much extra return.</p>
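<p>To see why adding a stabilizer helps, here is a minimal sketch of blending the equity and corporate bond figures from the earlier table. It assumes zero correlation between the two assets for simplicity (real stock/bond correlations move around over time), so treat it as an illustration of the mechanic, not a prediction:</p>
<pre><code class="lang-python">import math

# Ballpark inputs from the table: (avg annual return %, volatility %)
equity = (8.0, 13.5)       # broad equity ETF
corp_bonds = (4.5, 6.5)    # corporate bond ETF

def blend(w_equity):
    """Return and volatility of a two-asset mix, assuming zero correlation
    (a simplification; real correlations vary)."""
    w_bonds = 1 - w_equity
    ret = w_equity * equity[0] + w_bonds * corp_bonds[0]
    vol = math.sqrt((w_equity * equity[1]) ** 2 + (w_bonds * corp_bonds[1]) ** 2)
    return ret, vol

for w in (1.0, 0.6, 0.0):
    ret, vol = blend(w)
    print(f"{int(w * 100)}% equity: return {ret:.1f}%, vol {vol:.1f}%, ratio {ret / vol:.2f}")
</code></pre>
<p>Under these assumptions the 60/40 mix scores better on return per unit of risk than either asset held alone, which is the diversification effect in one line of arithmetic.</p>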
<hr />
<h2 id="heading-5-the-core-message">5. The core message</h2>
<p>If you only remember one thing, make it this:</p>
<blockquote>
<p>Don’t just ask “what can make the most money?”<br />Ask “what gives the most return <strong>per unit of risk</strong> for a normal investor?”</p>
</blockquote>
<p>Viewed through that lens, <strong>broad equity ETFs plus corporate bond ETFs</strong> already put you in a very efficient part of the risk‑return spectrum. Everything else is fine tuning and personal preference on top.</p>
]]></content:encoded></item><item><title><![CDATA[Your Odds of Making Money With Websites (And How to Tilt Them)]]></title><description><![CDATA[Most advice about "making money online" quietly assumes something dangerous:

If you just work hard and follow the right steps, you’ll succeed.

Reality is less friendly and more statistical.
You are not just building a website. You are entering a pr...]]></description><link>https://blog.samiralibabic.com/your-odds-of-making-money-with-websites-and-how-to-tilt-them</link><guid isPermaLink="true">https://blog.samiralibabic.com/your-odds-of-making-money-with-websites-and-how-to-tilt-them</guid><category><![CDATA[Monetization]]></category><category><![CDATA[websites]]></category><category><![CDATA[building]]></category><category><![CDATA[shipping]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Sat, 29 Nov 2025 11:00:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4kIM7ED8F1A/upload/724528a0f18ad45c7ccbcf72dbb1fff2.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most advice about "making money online" quietly assumes something dangerous:</p>
<blockquote>
<p>If you just work hard and follow the right steps, you’ll succeed.</p>
</blockquote>
<p>Reality is less friendly and more statistical.</p>
<p>You are not just building a website. You are entering a probabilistic game with low base odds, heavy skew, and a few levers you can actually control.</p>
<p>This post is about those odds – and how to think like a founder who plays the game rationally instead of emotionally.</p>
<hr />
<h2 id="heading-what-does-success-even-mean">What Does “Success” Even Mean?</h2>
<p>First we need a clear definition. Let’s call a project <strong>“successful”</strong> if:</p>
<ul>
<li><p>It reaches <strong>at least $500 net profit per month</strong></p>
</li>
<li><p>It does so <strong>within 24 months</strong> of starting</p>
</li>
<li><p>It sustains that level for <strong>at least six consecutive months</strong></p>
</li>
</ul>
<p>This isn’t a unicorn. It’s a modest, but meaningful, bootstrapped win.</p>
<p>If you’re honest, this is already where most projects fail.</p>
<hr />
<h2 id="heading-why-the-base-odds-are-low">Why the Base Odds Are Low</h2>
<p>Ignore the success stories for a second.</p>
<p>Most small digital projects quietly die. They never pass a few hundred visitors per month, never find a working monetization model, or never make it past the founder’s exhaustion.</p>
<p>From everything we know about startups, indie projects and niche sites, a <strong>single attempt</strong> at building a profitable website probably has single-digit odds of success.</p>
<p>Let’s use a simple, conservative range:</p>
<blockquote>
<p><strong>Initial chance of success per attempt: 1–5 percent</strong></p>
</blockquote>
<p>That sounds depressing, but it’s also liberating. It forces you to stop thinking in terms of <em>one big bet</em> and start thinking like a statistician.</p>
<hr />
<h2 id="heading-the-four-multipliers-of-success">The Four Multipliers of Success</h2>
<p>Instead of treating success as magic, we can break it down into four independent questions:</p>
<ol>
<li><p><strong>Niche Fit</strong> – Did you pick a space where you actually have a chance?</p>
</li>
<li><p><strong>Model Fit</strong> – Are you monetizing in a way that matches how value flows in that space?</p>
</li>
<li><p><strong>Execution Quality</strong> – Is the thing you built any good at all by market standards?</p>
</li>
<li><p><strong>Time Survival</strong> – Do you stay in the game long enough for compounding to work?</p>
</li>
</ol>
<p>Think of your probability of success <strong>P(S)</strong> as a product of these four factors:</p>
<pre><code class="lang-text">P(S) ≈ P(Niche Fit) × P(Model Fit) × P(Execution Quality) × P(Time Survival)
</code></pre>
<p>You don’t need precise numbers. The point is that if any of these terms is close to zero, the whole product is close to zero.</p>
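<p>To see how the multiplication punishes weak links, here is a tiny sketch with made-up factor estimates. The individual numbers are guesses; only the structure matters:</p>
<pre><code class="lang-python"># Made-up, subjective estimates for each factor (0 to 1).
factors = {
    "niche_fit": 0.5,
    "model_fit": 0.5,
    "execution": 0.4,
    "survival": 0.5,
}

p_success = 1.0
for p in factors.values():
    p_success *= p

print(f"P(S) ≈ {p_success:.3f}")  # 0.050, i.e. about 5%
</code></pre>
<p>Drop any single factor to 0.05 and P(S) collapses to well under 1%, which is the whole point: one near-zero term zeroes the product.</p>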
<p>Let’s look at each one.</p>
<hr />
<h2 id="heading-1-niche-fit-are-you-even-in-the-right-arena">1. Niche Fit: Are You Even in the Right Arena?</h2>
<p>Most founders underestimate how much <strong>the starting position</strong> matters.</p>
<p>Questions to ask:</p>
<ul>
<li><p>Are people already actively searching for this?</p>
</li>
<li><p>Are they spending money on this problem right now?</p>
</li>
<li><p>Who dominates the search results and mindshare?</p>
</li>
</ul>
<p>Rough signals of <strong>poor niche fit</strong>:</p>
<ul>
<li><p>All main keywords are brutally competitive and dominated by huge brands</p>
</li>
<li><p>The people who care about the topic don’t have much money or urgency</p>
</li>
<li><p>The problem is real, but the buyer is unclear or scattered</p>
</li>
</ul>
<p>Rough signals of <strong>good niche fit</strong>:</p>
<ul>
<li><p>There is clear search demand, but SERPs are fragmented and mediocre</p>
</li>
<li><p>You can name specific, reachable customer types who already spend money</p>
</li>
<li><p>You can articulate a painful problem in one or two sentences without jargon</p>
</li>
</ul>
<p>The uncomfortable truth: if you choose a bad niche, you can execute extremely well and still lose.</p>
<hr />
<h2 id="heading-2-model-fit-are-you-getting-paid-in-the-right-way">2. Model Fit: Are You Getting Paid in the Right Way?</h2>
<p>Even if there’s demand, you can still kill your odds with the wrong monetization model.</p>
<p>Very roughly, the web offers five base models:</p>
<ol>
<li><p><strong>Direct sales</strong> – products, services, info products</p>
</li>
<li><p><strong>Lead generation</strong> – collect leads and sell them or close them yourself</p>
</li>
<li><p><strong>Ads</strong> – sell attention (CPM, CPC)</p>
</li>
<li><p><strong>Affiliate</strong> – send buyers to someone else for a cut</p>
</li>
<li><p><strong>Subscriptions / SaaS</strong> – recurring value, recurring revenue</p>
</li>
</ol>
<p>Questions to ask:</p>
<ul>
<li><p>Will the people I attract <strong>actually pay</strong> in this context?</p>
</li>
<li><p>Is my monetization path <strong>aligned</strong> with their journey?</p>
</li>
<li><p>What is my <strong>revenue per 1,000 visitors (RPM)</strong> likely to be in this niche?</p>
</li>
</ul>
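<p>RPM itself is just revenue scaled to a thousand visitors. A quick sketch with hypothetical numbers:</p>
<pre><code class="lang-python">def rpm(revenue, visitors):
    """Revenue per 1,000 visitors (RPM)."""
    return revenue * 1000 / visitors

# Hypothetical niche site: 30,000 monthly visits earning 450 in affiliate fees
print(rpm(450, 30_000))  # 15.0
</code></pre>
<p>Comparing a niche’s realistic RPM against what you would need for your target profit tells you quickly whether a traffic-based model can work at all.</p>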
<p>Examples of <strong>bad model fit</strong>:</p>
<ul>
<li><p>Trying to monetize low-intent curiosity traffic with a high-ticket B2B offer</p>
</li>
<li><p>Relying on ads in a tiny niche with almost no advertisers</p>
</li>
<li><p>Selling a subscription in a space where the problem is rare and episodic</p>
</li>
</ul>
<p>Examples of <strong>good model fit</strong>:</p>
<ul>
<li><p>High-intent, “ready to buy” search traffic to a comparison site with affiliate deals</p>
</li>
<li><p>Painful recurring problem solved with a lightweight subscription tool</p>
</li>
<li><p>Local intent traffic monetized via lead gen for businesses with high ticket size</p>
</li>
</ul>
<p>If niche fit says “this is worth solving,” model fit says “this is worth paying for, in this way.”</p>
<hr />
<h2 id="heading-3-execution-quality-does-your-system-actually-work">3. Execution Quality: Does Your System Actually Work?</h2>
<p>You can nail both niche and model and still fail if your execution is weak.</p>
<p>Execution is not about code elegance. It is about whether your system actually converts reality into money.</p>
<p>Some brutally simple execution questions:</p>
<ul>
<li><p>Can people understand what you do in <strong>five seconds</strong> on the homepage?</p>
</li>
<li><p>Is your site fast enough not to bleed users? (LCP under ~2.5s)</p>
</li>
<li><p>Are you at least in the ballpark of normal conversion rates for your model?</p>
</li>
<li><p>Are you running experiments regularly, or just “hoping”?</p>
</li>
</ul>
<p>Execution quality is where many technical founders are surprised. They build, deploy, and then wait – but they do not iterate on copy, onboarding, pricing, or UX with discipline.</p>
<p>Low execution quality doesn’t just reduce your revenue. It destroys your <strong>learning speed</strong>, which is even worse.</p>
<hr />
<h2 id="heading-4-time-survival-will-you-still-be-here-when-it-starts-working">4. Time Survival: Will You Still Be Here When It Starts Working?</h2>
<p>Even with a good niche, a viable model, and decent execution, you can still fail for a simple reason:</p>
<blockquote>
<p>You run out of time, money, or motivation before compounding kicks in.</p>
</blockquote>
<p>This is the most underrated factor.</p>
<p>Survival questions:</p>
<ul>
<li><p>How many months of runway do you have at your current burn rate?</p>
</li>
<li><p>How many focused hours per week do you realistically put into this?</p>
</li>
<li><p>What’s your psychological tolerance for slow progress?</p>
</li>
</ul>
<p>A lot of founders technically “could” succeed, but their effective <strong>P(Time Survival)</strong> is near zero. They pivot, burn out, or give up before any flywheel spins up.</p>
<hr />
<h2 id="heading-a-simple-example-two-founders-same-idea">A Simple Example: Two Founders, Same Idea</h2>
<p>Imagine two people launching the same kind of site: a niche comparison website monetized with affiliate deals.</p>
<h3 id="heading-founder-a">Founder A</h3>
<ul>
<li><p>Picks a niche with <strong>huge search volume</strong> but brutal competition (credit cards, VPNs, etc.)</p>
</li>
<li><p>Monetizes with affiliate offers (good model fit in theory)</p>
</li>
<li><p>Execution is mediocre: generic content, slow site, little CRO</p>
</li>
<li><p>Gives up after 8 months of weak results</p>
</li>
</ul>
<p>Very roughly:</p>
<ul>
<li><p>Niche Fit: poor → low probability</p>
</li>
<li><p>Model Fit: decent → medium</p>
</li>
<li><p>Execution: weak → low</p>
</li>
<li><p>Time Survival: weak → low</p>
</li>
</ul>
<p>Multiply four “low” factors and the end result is basically <strong>noise-level odds</strong>.</p>
<h3 id="heading-founder-b">Founder B</h3>
<ul>
<li><p>Picks a smaller, underserved niche with clear intent (e.g. specialized B2B tools)</p>
</li>
<li><p>Monetizes with a mix of affiliate + lead gen to a few partners</p>
</li>
<li><p>Obsesses over speed, clarity, and conversion</p>
</li>
<li><p>Commits to 18–24 months and ships every week</p>
</li>
</ul>
<p>Now the picture looks very different:</p>
<ul>
<li><p>Niche Fit: good → medium/high</p>
</li>
<li><p>Model Fit: aligned → medium/high</p>
</li>
<li><p>Execution: improving → medium</p>
</li>
<li><p>Time Survival: strong → medium/high</p>
</li>
</ul>
<p>The original base rate hasn’t changed. But this founder has systematically pushed each term up. The final probability of success is still not guaranteed, but it’s no longer a lottery ticket.</p>
<hr />
<h2 id="heading-thinking-in-portfolios-instead-of-single-bets">Thinking in Portfolios Instead of Single Bets</h2>
<p>Here’s the key mental shift:</p>
<p>If a single website attempt has, say, a 5 percent chance of hitting your definition of success, then you shouldn’t be thinking, “How do I make this one work at all costs?”</p>
<p>You should be thinking:</p>
<blockquote>
<p>How do I take <strong>multiple, informed shots</strong> while increasing my odds on each new attempt?</p>
</blockquote>
<p>Very roughly, if each project had a 5 percent independent chance of success, then building ten projects over a few years gives you:</p>
<pre><code class="lang-text">P(at least one success) = 1 − (1 − 0.05)¹⁰ ≈ 40%
</code></pre>
<p>Reality is messier than this toy model, but the principle stands:</p>
<ul>
<li><p>One shot feels romantic, but is statistically brutal.</p>
</li>
<li><p>A portfolio of shots with <strong>learning carried forward</strong> is rational.</p>
</li>
</ul>
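<p>The toy math above generalizes to any per-attempt odds and any number of attempts. A minimal sketch:</p>
<pre><code class="lang-python">def p_at_least_one(p_single, attempts):
    """Chance that at least one of several independent attempts succeeds."""
    return 1 - (1 - p_single) ** attempts

for n in (1, 3, 10):
    print(f"{n} attempts at 5% each: {p_at_least_one(0.05, n):.0%}")
# 1 attempt: 5%, 3 attempts: 14%, 10 attempts: 40%
</code></pre>
<p>Real attempts aren’t independent, and that works in your favor: each project should raise your per-attempt odds, not just add another draw.</p>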
<p>Each new attempt should:</p>
<ul>
<li><p>Use a better-chosen niche</p>
</li>
<li><p>Use a better-matched model</p>
</li>
<li><p>Be executed with more skill and faster feedback loops</p>
</li>
<li><p>Be structured in a way that doesn’t destroy your runway and motivation</p>
</li>
</ul>
<hr />
<h2 id="heading-how-to-estimate-where-you-stand-without-fancy-math">How to Estimate Where You Stand (Without Fancy Math)</h2>
<p>You don’t need a PhD to use this thinking. You just need an honest self-assessment.</p>
<p>For each of the four dimensions, rate yourself from 1 to 5:</p>
<ol>
<li><p><strong>Niche Fit</strong> – From “no idea if anyone will pay for this” (1) to “clear pain and clear payers” (5).</p>
</li>
<li><p><strong>Model Fit</strong> – From “I slapped on ads or affiliate links and hope” (1) to “my monetization matches how people already buy” (5).</p>
</li>
<li><p><strong>Execution Quality</strong> – From “it barely works and I don’t track anything” (1) to “I ship fast, measure, and improve regularly” (5).</p>
</li>
<li><p><strong>Time Survival</strong> – From “I have a few months before I must quit” (1) to “I can keep going for years without destroying my life” (5).</p>
</li>
</ol>
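<p>If you want one number out of the four ratings, you can map each 1–5 score to a rough factor and multiply, mirroring the product idea from earlier. The mapping below is invented for illustration, not calibrated data; only the relative movement matters:</p>
<pre><code class="lang-python"># Hypothetical mapping from a 1–5 self-rating to a rough factor.
score_to_factor = {1: 0.05, 2: 0.2, 3: 0.4, 4: 0.6, 5: 0.8}

def relative_odds(ratings):
    """Multiply the mapped factors; useful only for comparing scenarios."""
    result = 1.0
    for r in ratings:
        result *= score_to_factor[r]
    return result

improvement = relative_odds([4, 4, 4, 4]) / relative_odds([2, 2, 2, 2])
print(f"moving every dimension from 2 to 4: {improvement:.0f}x better odds")  # 81x
</code></pre>
<p>The absolute output means nothing; the comparison shows why raising every dimension a little beats maxing out one of them.</p>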
<p>You don’t need exact probabilities. The important part is:</p>
<ul>
<li><p>If any dimension is at <strong>1</strong>, treat it as an emergency.</p>
</li>
<li><p>If you’re sitting at <strong>2–3</strong> in all four, expect slow and fragile progress.</p>
</li>
<li><p>If you’re moving <strong>3→4→5</strong> across the board over time, your personal base rate is going up.</p>
</li>
</ul>
<hr />
<h2 id="heading-what-this-means-for-founders">What This Means for Founders</h2>
<p>Most founders are told some version of: "Believe in yourself and never give up."</p>
<p>What you actually need is closer to this:</p>
<ul>
<li><p>Accept that a single attempt has low odds.</p>
</li>
<li><p>Break success into <strong>niche, model, execution, survival</strong>.</p>
</li>
<li><p>Systematically raise each term instead of hoping.</p>
</li>
<li><p>Design your life and finances so you can take <strong>multiple shots</strong>.</p>
</li>
</ul>
<p>When you do that, you stop playing the role of the hero in a startup fairy tale and start acting like an engineer of probability.</p>
<p>You cannot control the outcome. But you can control the structure of the game you’re playing.</p>
<p>And if you keep improving that structure, project after project, at some point the numbers quietly shift in your favor.</p>
]]></content:encoded></item><item><title><![CDATA[The Garage Myth 2.0 – What Jobs, Gates and Zuck Actually Did (and What Still Works Now)]]></title><description><![CDATA[We love the story: a kid in a garage or dorm room, no money, no connections, just a wild idea that somehow becomes Apple, Microsoft or Facebook. It is such a powerful narrative that it can do two opposite things at once:

Make you believe you are one...]]></description><link>https://blog.samiralibabic.com/the-garage-myth-20-what-jobs-gates-and-zuck-actually-did-and-what-still-works-now</link><guid isPermaLink="true">https://blog.samiralibabic.com/the-garage-myth-20-what-jobs-gates-and-zuck-actually-did-and-what-still-works-now</guid><category><![CDATA[startup]]></category><category><![CDATA[Founder]]></category><category><![CDATA[Bootstrapping]]></category><category><![CDATA[Business growth ]]></category><category><![CDATA[business]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Fri, 28 Nov 2025 09:49:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/qYMkkREOHa4/upload/83813a055a2ad9450bbdcec12d38949b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We love the story: a kid in a garage or dorm room, no money, no connections, just a wild idea that somehow becomes Apple, Microsoft or Facebook. It is such a powerful narrative that it can do two opposite things at once:</p>
<ul>
<li><p>Make you believe you are one genius all-nighter away from changing the world.</p>
</li>
<li><p>Make you feel like if you have not started a unicorn in your parents’ garage by 20, you are already too late.</p>
</li>
</ul>
<p>Both reactions miss the point.</p>
<p>The interesting part is not the garage. The interesting part is the <em>system behind the garage</em>: the constraints, behaviors and decisions that keep showing up in these stories. Those are the few things you can actually copy.</p>
<p>This is a practical look at what Jobs, Gates, Musk and Zuckerberg actually did in their early years, what still works in 2026, and how to apply it if you are a regular, non-billionaire human starting from an apartment, dorm or kitchen table.</p>
<hr />
<h2 id="heading-1-what-actually-happened-in-those-garages-and-dorms">1. What actually happened in those garages and dorms</h2>
<p>The myth: a couple of geniuses disappear into a garage, emerge with a world changing product, the rest is history.</p>
<p>The reality is more boring and more useful.</p>
<h3 id="heading-apple-a-hobbyist-kit-not-a-revolution">Apple: a hobbyist kit, not a revolution</h3>
<p>In 1976, Steve Jobs and Steve Wozniak ran their new company, Apple Computer, out of Jobs’ parents’ garage in Los Altos. That part of the story is true.</p>
<p>The part that gets lost: the garage was not some magical lab. Wozniak later said they did “no designs there… no manufacturing there.” The garage was cheap space: a place to store parts, assemble boards, and load finished Apple I computers into the car to deliver to a local computer shop.</p>
<p>The first Apple product was not a polished Macintosh. It was a bare circuit board sold to hobbyists. They got a purchase order from a local retailer, built 50 boards as fast and as cheaply as they could, collected cash, and only then could they think about a second run.</p>
<p>Key point: <strong>the garage was a cost saving hack wrapped around a very small, very specific product for a niche audience.</strong></p>
<h3 id="heading-microsoft-tiny-office-one-big-customer">Microsoft: tiny office, one big customer</h3>
<p>In 1975, Bill Gates and Paul Allen did not start with a giant vision board about “a computer in every home.” They saw a single opportunity: a new kit computer called the Altair 8800 that had no good programming language.</p>
<p>They wrote a BASIC interpreter, convinced the manufacturer to license it, and suddenly had a real customer on day one. The company that became Microsoft ran out of a tiny office in Albuquerque. The setup was closer to a scrappy agency than a modern startup campus.</p>
<p>Key point: <strong>they started with one clear customer and one clear product, and kept costs absurdly low while they shipped.</strong></p>
<h3 id="heading-elon-musk-sleeping-in-the-office-to-keep-burn-near-zero">Elon Musk: sleeping in the office to keep burn near zero</h3>
<p>In 1995, Elon and Kimbal Musk started Zip2, an online city guide. They had almost no money, so they simply removed the line item called “living expenses.”</p>
<p>They rented a small office, slept on the floor, showered at the YMCA and used a single computer as both development machine and server. They hired commission only salespeople to sell listings door to door. The moment there was a little cash, they put it into the product and sales, not comfort.</p>
<p>Key point: <strong>their “garage” was a deliberate decision to make burn rate almost zero and buy time to figure things out.</strong></p>
<h3 id="heading-facebook-one-dorm-one-campus-then-the-graph-expands">Facebook: one dorm, one campus, then the graph expands</h3>
<p>In 2004, Mark Zuckerberg launched Thefacebook from his Harvard dorm room. The product was tiny in scope: profiles, photos, basic social graph – for one campus only.</p>
<p>The magic was not the code. It was the <em>fit</em> with a very specific environment: elite universities where people cared a lot about social status and were already online all day.</p>
<p>Adoption was explosive inside that bubble. Only after it worked at Harvard did they expand to other universities, then other networks, then the open web. Funding and a real office came after that, not before.</p>
<p>Key point: <strong>Facebook started as a narrow, almost toy sized product that fit one social context perfectly.</strong></p>
<hr />
<h2 id="heading-2-the-real-pattern-behind-the-myths">2. The real pattern behind the myths</h2>
<p>If you strip away the hero worship, these stories share a boring but powerful pattern:</p>
<ol>
<li><p><strong>A big wave in its early phase</strong><br /> Personal computers in the 70s, the web in the 90s, social networking in the 2000s. The founders were early on a wave that was going to grow with or without them.</p>
</li>
<li><p><strong>A small, sharp wedge into that wave</strong></p>
<ul>
<li><p>Apple: a hobbyist computer kit for local nerds.</p>
</li>
<li><p>Microsoft: BASIC for one specific computer.</p>
</li>
<li><p>Zip2: yellow pages for newspapers.</p>
</li>
<li><p>Facebook: a social directory for Harvard students.</p>
</li>
</ul>
</li>
<li><p><strong>Ridiculously low burn</strong><br /> Free spaces (garages, dorms, apartments), no salaries at first, no nice offices, almost no fixed costs. Comfort was postponed on purpose.</p>
</li>
<li><p><strong>Fast, imperfect shipping</strong><br /> They shipped something that worked, not something that was done. Bugs, hacks and ugly design were tolerated as long as real users could use it.</p>
</li>
<li><p><strong>A tight feedback loop with real users</strong><br /> Many of those early users were friends, classmates or local customers you could talk to in person. Feedback was fast and brutal.</p>
</li>
<li><p><strong>Relentless iteration and a stupid amount of work</strong><br /> Long hours, context switching between coding, selling, support and fundraising. Not glamorous. Just grind.</p>
</li>
<li><p><strong>Some luck, multiplied by action</strong><br /> Being early on the right wave is luck. Turning that into a business is work. Many people were “around” the same opportunities. Very few actually built something and stuck with it.</p>
</li>
</ol>
<p>You cannot copy the luck or the exact timing. You <em>can</em> copy the wedge, the low burn, the shipping cadence and the feedback loop.</p>
<hr />
<h2 id="heading-3-what-has-changed-and-what-has-not-2026-reality">3. What has changed and what has not (2026 reality)</h2>
<p>The world you are starting in today is not the world of 1976 or even 2004.</p>
<h3 id="heading-what-is-harder-now">What is harder now</h3>
<ul>
<li><p><strong>Competition and noise.</strong><br />  Every idea has a dozen clones on Product Hunt before lunch. Users are flooded with apps and pitches.</p>
</li>
<li><p><strong>Distribution is the bottleneck.</strong><br />  Building a product is easier than ever. Getting attention and trust is the hard part.</p>
</li>
<li><p><strong>Incumbents are stronger.</strong><br />  Big tech has network effects, data and capital. They can copy features quickly if you grow enough to be noticed.</p>
</li>
</ul>
<h3 id="heading-what-is-easier-now">What is easier now</h3>
<ul>
<li><p><strong>Infrastructure is almost free at the start.</strong><br />  You get hosting, databases, version control, payments, email and AI via APIs for pocket change.</p>
</li>
<li><p><strong>Knowledge is free.</strong><br />  Tutorials, open source, forums and AI tools make it possible to learn and build much faster than previous generations.</p>
</li>
<li><p><strong>Global reach from day one.</strong><br />  You do not need a local hardware store or a single campus. Your first users can be in five countries by accident.</p>
</li>
</ul>
<p>So the modern “garage” is not a physical room. It is a cheap laptop, a stable internet connection, a Stripe account, a GitHub repo and a couple of SaaS subscriptions. The pattern, however, is the same: pick a wave, find a wedge, keep burn low, iterate in public.</p>
<hr />
<h2 id="heading-4-a-simple-system-you-can-actually-run">4. A simple system you can actually run</h2>
<p>Let us turn the pattern into something concrete you can run from an apartment, dorm or kitchen table.</p>
<p>Think of it as a 6–18 month system, not a weekend hackathon.</p>
<h3 id="heading-step-1-pick-your-wave-and-wedge">Step 1: Pick your wave and wedge</h3>
<ol>
<li><p><strong>Choose a growing wave you understand at least a little.</strong><br /> Examples: AI tooling, developer productivity, creator economy, niche SaaS, vertical marketplaces, remote work tooling.</p>
</li>
<li><p><strong>Define a stupidly narrow wedge into that wave.</strong><br /> Instead of “AI for e-commerce,” think “AI that turns product feeds into ready-to-post social content for Etsy sellers.” Instead of “healthcare platform,” think “queue management SaaS for vets in small clinics.”</p>
</li>
</ol>
<p>Litmus tests for a good wedge:</p>
<ul>
<li><p>You can describe the user and problem in one sentence.</p>
</li>
<li><p>You could plausibly reach the first 50 users manually (email, DMs, communities).</p>
</li>
</ul>
<h3 id="heading-step-2-fix-your-constraints-on-purpose">Step 2: Fix your constraints on purpose</h3>
<p>Garage stories are mostly stories about constraints. Use that as a feature.</p>
<p>Write down your rules:</p>
<ul>
<li><p><strong>Time budget.</strong> How many hours per week can you realistically invest for at least a year?</p>
</li>
<li><p><strong>Money budget.</strong> How much are you willing to burn per month without panicking? Assume zero external funding.</p>
</li>
<li><p><strong>Comfort sacrifices.</strong> What are you willing to trade temporarily? Nights, weekends, side gigs, lifestyle upgrades?</p>
</li>
</ul>
<p>Then design your setup to enforce low burn:</p>
<ul>
<li><p>Work from home or shared spaces.</p>
</li>
<li><p>Use free or cheap tools until they hurt.</p>
</li>
<li><p>Avoid fixed costs: employees, long term contracts, big office, custom hardware.</p>
</li>
</ul>
<p>The goal is to buy yourself runway: 12–18 months to experiment without going broke.</p>
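<p>To make that concrete: runway is just cash divided by burn. A back-of-the-envelope sketch, with purely illustrative numbers:</p>

```python
# Back-of-the-envelope runway check (illustrative numbers, not financial advice).
def runway_months(cash_on_hand: float, monthly_burn: float) -> float:
    """Months you can keep experimenting before the money runs out."""
    if monthly_burn <= 0:
        raise ValueError("monthly burn must be positive")
    return cash_on_hand / monthly_burn

# Example: 9,000 saved at a 600/month burn buys 15 months of runway.
print(runway_months(9000, 600))  # 15.0
```

<p>If the number that comes out is under 12, lower the burn or shrink the scope before you start, not after.</p>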
<h3 id="heading-step-3-build-the-smallest-thing-that-proves-the-point">Step 3: Build the smallest thing that proves the point</h3>
<p>Your first goal is not to build “the app.” Your first goal is to prove that:</p>
<blockquote>
<p>“This specific type of user has this specific problem and is willing to use something clunky if it solves it.”</p>
</blockquote>
<p>That means:</p>
<ul>
<li><p>Build a thin vertical slice: one core workflow that goes from input to outcome.</p>
</li>
<li><p>Accept ugly UI, manual steps, duct taped backends.</p>
</li>
<li><p>Use no code or scripts if they get you there faster.</p>
</li>
</ul>
<p>If you cannot ship a thin slice in 4–6 weeks with your current constraints, the scope is probably too big.</p>
<h3 id="heading-step-4-put-it-in-front-of-real-users-fast">Step 4: Put it in front of real users fast</h3>
<p>Stop polishing alone. Pick a small number of real people:</p>
<ul>
<li><p>Friends or colleagues who match the target user.</p>
</li>
<li><p>People in niche communities (Reddit, Discord, forums, Slack groups).</p>
</li>
<li><p>Small businesses or creators you already know.</p>
</li>
</ul>
<p>Offer them something extremely clear:</p>
<ul>
<li>“I built X that does Y for people like you. Want to try it for free if I fix issues quickly?”</li>
</ul>
<p>Watch them use it. Take notes. Ask dumb questions:</p>
<ul>
<li><p>“What did you expect here?”</p>
</li>
<li><p>“What was annoying?”</p>
</li>
<li><p>“What are you doing today instead of this?”</p>
</li>
</ul>
<p>Do not argue. Do not pitch. Just observe.</p>
<h3 id="heading-step-5-iterate-or-kill-and-recycle">Step 5: Iterate, or kill and recycle</h3>
<p>Every 4–6 weeks, make a brutal decision:</p>
<ul>
<li><p><strong>Double down:</strong> Users are coming back on their own, telling others, or asking for more.</p>
</li>
<li><p><strong>Pivot the wedge:</strong> Problem seems real, but your approach misses. Try a different angle for the same user.</p>
</li>
<li><p><strong>Kill it:</strong> Users do not care enough, and you are pushing a rope. Archive the code, keep the lessons, move on.</p>
</li>
</ul>
<p>The garage myth hides how often early projects died or pivoted before the “success story” started. Treat small failures as part of the system, not as identity.</p>
<h3 id="heading-step-6-only-then-think-about-scale-and-funding">Step 6: Only then think about scale and funding</h3>
<p>Funding, offices, hiring and PR are all second order. In the old stories they came <em>after</em> there was a working wedge:</p>
<ul>
<li><p>Apple got serious investment after the Apple II started selling.</p>
</li>
<li><p>Facebook got its first check after strong engagement on US campuses.</p>
</li>
<li><p>Airbnb got into Y Combinator after showing their desperate but effective cereal-box hustle.</p>
</li>
</ul>
<p>Your version:</p>
<ul>
<li><p>Once you have a small group of users who would be upset if you shut down, you have options.</p>
</li>
<li><p>You can slowly charge them.</p>
</li>
<li><p>You can show real usage graphs to potential investors.</p>
</li>
<li><p>Or you can keep it as a calm, profitable small business.</p>
</li>
</ul>
<p>There is no law that says you must swing for a trillion dollar outcome. “A solid, weird little SaaS that pays for your life” is a perfectly valid endgame.</p>
<hr />
<h2 id="heading-5-how-long-does-it-actually-take">5. How long does it actually take?</h2>
<p>Looking at the famous cases is misleading. The narrative jumps from “garage” to “IPO” in a few paragraphs.</p>
<p>A more honest expectation:</p>
<ul>
<li><p><strong>0–6 months:</strong> learning the domain, talking to users, shipping first ugly versions.</p>
</li>
<li><p><strong>6–18 months:</strong> either you find a real wedge with traction, or you accumulate a few failed attempts and a lot of scar tissue.</p>
</li>
<li><p><strong>3–5 years:</strong> enough time for one good wedge to turn into a stable business if you keep at it.</p>
</li>
</ul>
<p>There are outliers who explode faster. There are many more who quietly grind for years.</p>
<p>The point is not to predict the timeline. The point is to accept up front that you are signing up for a multi year game, not a viral weekend.</p>
<hr />
<h2 id="heading-6-so-is-it-luck-or-a-system">6. So, is it luck or a system?</h2>
<p>Both.</p>
<ul>
<li><p>You do not control the macro wave, the exact timing or who you happen to meet.</p>
</li>
<li><p>You do control whether you pick a growing space, define a narrow wedge, keep burn low, ship fast and keep talking to users.</p>
</li>
</ul>
<p>If you do the second group well, you increase your odds that when luck shows up, there is something for it to multiply.</p>
<p>The mistake is to either:</p>
<ul>
<li><p>Dismiss the legends as “just lucky,” and therefore learn nothing, or</p>
</li>
<li><p>Treat their biographies as step by step instructions you must copy, including the turtlenecks and sleeping on the floor.</p>
</li>
</ul>
<p>A saner approach:</p>
<ul>
<li><p>Copy their <strong>constraints</strong> (low burn, small wedge, real users, long runway).</p>
</li>
<li><p>Copy their <strong>behaviors</strong> (shipping, iteration, discomfort tolerance).</p>
</li>
<li><p>Do <strong>not</strong> copy their specific market, aesthetic or personal mythology.</p>
</li>
</ul>
<p>Your version of the “garage” will look different. It might be a cramped flat, a dorm bed with a laptop, or a kitchen table after the kids are asleep.</p>
<p>That is fine. The building blocks are the same: pick a wave, carve a wedge, keep burn low, ship often, stay in the game long enough for luck to notice you.</p>
<p>That is as close to a system as these stories will ever give you.</p>
]]></content:encoded></item><item><title><![CDATA[What I Learned Building Print2Social: Social Automation Isn’t Enough for Store Traffic, So Here’s Our Next Chapter]]></title><description><![CDATA[Over the last year I’ve been building Print2Social, a tool designed to automate social media content for print-on-demand (POD) and indie e-commerce store owners. The goal was to help shop owners like myself spend less time on content creation and mor...]]></description><link>https://blog.samiralibabic.com/what-i-learned-building-print2social-social-automation-isnt-enough-for-store-traffic-so-heres-our-next-chapter</link><guid isPermaLink="true">https://blog.samiralibabic.com/what-i-learned-building-print2social-social-automation-isnt-enough-for-store-traffic-so-heres-our-next-chapter</guid><category><![CDATA[#PrintOnDemand]]></category><category><![CDATA[ecommerce]]></category><category><![CDATA[social media marketing]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Thu, 10 Jul 2025 12:52:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752151838151/ab9a5896-660d-451f-a9df-6d0d90ae3886.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the last year I’ve been buildin<a target="_blank" href="https://print2social.com/">g</a> <a target="_blank" href="https://print2social.com/">Print2Soci</a><a target="_blank" href="https://print2social.com/">al</a>, a tool designed to automate social media content for print-on-demand (POD) and indie e-commerce store owners. The goal was to help shop owners like myself spend less time on content creation and more time building their business, with the hope that automated posts on Instagram, Facebook, and TikTok would help grow our brands and bring real traffic to our stores.</p>
<p>After months of real-world testing, here’s what I found:</p>
<h3 id="heading-the-reality-social-automation-brings-views-not-store-traffic">The Reality: Social Automation Brings Views, Not Store Traffic</h3>
<p>I connected Print2Social to Facebook, Instagram, and TikTok and let it auto-post hundreds of pieces of content, including lifestyle images, product highlights, and short-form videos. Some TikTok videos reached over 700 views, but none of it translated into real store traffic, even after optimizing captions, hashtags, and link-in-bio strategies.</p>
<p>Social media is designed to keep users inside the app. Even if you gain views or followers, very few people will actually click out to your website. For indie store owners, automated posting alone just does not drive targeted, purchase-ready traffic.</p>
<h3 id="heading-why-im-pivoting-a-new-solution-for-actual-sales">Why I’m Pivoting: A New Solution for Actual Sales</h3>
<p>When I started Print2Social, my mission was to help small shop owners build real brands and get their first customers, not just social “vanity metrics.” I now realize that posting content is not enough. What really matters is creating a direct path from content to purchase.</p>
<p>So I am pivoting Print2Social in a new direction with shoppable social commerce as the focus.</p>
<p><strong>What’s next for Print2Social?</strong><br />We are building a tool that lets POD brands and indie shop owners easily publish shoppable social content by syncing their product catalogs and automating tagged posts across Meta Shops, TikTok Shop, and Pinterest Shopping. This will streamline the path from content to in-app purchase. When a customer sees your post, they can buy your product right inside TikTok, Instagram, or Pinterest without leaving the app.</p>
<p>Here’s what we are working on now:</p>
<ul>
<li><p>Sync your POD catalog (Shopify, WooCommerce, Printful, Printify) to all major social commerce platforms</p>
</li>
<li><p>Auto-generate media assets with product tags so every Reel, Story, Pin, or TikTok is instantly shoppable</p>
</li>
<li><p>Schedule and publish your shoppable content at the best times for your audience</p>
</li>
</ul>
<p>This is our answer to the “views but no clicks” problem, and I believe it will help brands turn engagement into actual revenue, not just followers.</p>
<h3 id="heading-my-advice-to-other-founders">My Advice to Other Founders</h3>
<p>Do not mistake social reach for meaningful traffic or sales.</p>
<p>Test your assumptions with real data, even if it means admitting when something is not working.<br />Be willing to pivot toward solutions that actually solve the core problem for your users.</p>
<p>If you are a store owner or builder who has struggled with this, or want to get early access to the new <a target="_blank" href="https://print2social.com">Print2Social</a>, let’s talk. Early beta is available at https://beta.print2social.com if you want to check it out.</p>
]]></content:encoded></item><item><title><![CDATA[How I Use AI Writing Tools for SEO: My Experience and Lessons Learned]]></title><description><![CDATA[AI writing tools have become an essential part of my workflow as a solo founder running multiple projects. Over the last year, I’ve experimented with various AI content generators for different purposes, from building blogs that bring organic traffic...]]></description><link>https://blog.samiralibabic.com/how-i-use-ai-writing-tools-for-seo-my-experience-and-lessons-learned</link><guid isPermaLink="true">https://blog.samiralibabic.com/how-i-use-ai-writing-tools-for-seo-my-experience-and-lessons-learned</guid><category><![CDATA[SEO]]></category><category><![CDATA[#content marketing]]></category><category><![CDATA[AI Writer]]></category><category><![CDATA[Blogging]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Sun, 16 Mar 2025 16:29:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1742142720485/0aafa867-09e0-4fb9-be58-2a21f5aa7754.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AI writing tools have become an essential part of my workflow as a solo founder running multiple projects. Over the last year, I’ve experimented with various AI content generators for different purposes, from building blogs that bring organic traffic to scaling content marketing for my directories and SaaS tools. In this article, I want to share my personal experiences with AI writing platforms like <strong>SEObot</strong>, <strong>Koala Writer</strong>, and <strong>Perplexity Deep Research</strong>, along with thoughts on others like <strong>ContentShake AI</strong>, <strong>Scalenut</strong>, and <strong>Outrank</strong>, which I’ve explored but haven’t fully adopted yet. I hope this will give you a more practical perspective on how these tools perform in real-world use cases, beyond the feature lists.</p>
<h1 id="heading-how-ai-writing-tools-fit-into-my-business"><strong>How AI Writing Tools Fit Into My Business</strong></h1>
<p>Running several businesses, including <a target="_blank" href="https://print2social.com/">Print2Social</a>, <a target="_blank" href="https://www.printondemandbusiness.com/">PrintOnDemandBusiness.com</a>, and <a target="_blank" href="https://affiliatecompanies.net/">AffiliateCompanies.net</a>, content marketing is crucial for attracting traffic. I don’t have the luxury to write every piece myself, so AI tools help me scale content production without sacrificing too much on quality.</p>
<h1 id="heading-seobot-fully-autonomous-and-surprisingly-effective"><strong>SEObot: Fully Autonomous and Surprisingly Effective</strong></h1>
<p>For <strong>Print2Social’s blog</strong>, I tested <strong>SEObot</strong> (<a target="_blank" href="https://seobotai.com/?aff=samir">seobotai.com</a>*), and what surprised me most was how hands-off it was. I set the topics, and the system took care of the rest, from keyword research to internal linking and even some traffic analysis.</p>
<p>Some of the articles written by SEObot, like “<a target="_blank" href="https://print2social.com/blog/print-on-demand-automation/">Print on Demand Automation</a>”, actually brought traffic and <strong>ranked well</strong>. The major upside? It <strong>saved me hours of work</strong>, and I could focus on other parts of the business.</p>
<p><strong>Pros:</strong></p>
<ul>
<li><p>Fully autonomous: create content with almost no manual effort</p>
</li>
<li><p>Built-in keyword research and internal linking</p>
</li>
<li><p>Traffic and performance monitoring</p>
</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li><p>Limited editing capabilities (but that’s the point I guess)</p>
</li>
<li><p>Not ideal if you want a very personalized brand voice</p>
</li>
</ul>
<h1 id="heading-koala-writer-user-friendly-and-high-quality-long-form-content"><strong>Koala Writer: User-Friendly and High-Quality Long-Form Content</strong></h1>
<p>For <strong>PrintOnDemandBusiness.com</strong> and <strong>AffiliateCompanies.net</strong>, I relied a lot on <strong>Koala Writer</strong> (<a target="_blank" href="https://koala.sh/?via=samir-alibabic">koala.sh</a>*). Articles like “<a target="_blank" href="https://www.printondemandbusiness.com/blog/best-print-on-demand-sites-top-platforms-for-custom-product-creators-in-2024/">Best Print on Demand Sites</a>” and “<a target="_blank" href="https://affiliatecompanies.net/blog/start-an-affiliate-program-key-steps-for-launching-your-own/">Start an Affiliate Program</a>” were written using Koala and performed <strong>exceptionally well.</strong></p>
<p>What I love about Koala is its <strong>balance between quality and ease of use</strong>. The interface is straightforward, and content feels human-like, especially when using GPT-4 (though you burn through credits faster). Koala’s ability to pull in data from the top-ranking articles also helps create <strong>relevant and SEO-friendly</strong> content. I paid <strong>$45 for 10,000 credits</strong>, which made it an affordable starting point.</p>
<p><strong>Pros:</strong></p>
<ul>
<li><p>High-quality content that doesn’t sound robotic</p>
</li>
<li><p>Easy to use, clean interface</p>
</li>
<li><p>Offers a lot of other features besides article writing</p>
</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li><p>Linking can sometimes point to unrelated content if not carefully reviewed</p>
</li>
<li><p>GPT-4 content uses more credits (higher cost)</p>
</li>
</ul>
<h1 id="heading-perplexity-deep-research-when-you-need-facts-not-fluff"><strong>Perplexity Deep Research: When You Need Facts, Not Fluff</strong></h1>
<p>When I needed <strong>research-backed content</strong>, like this <a target="_blank" href="https://www.printondemandbusiness.com/blog/gelato-print-on-demand-a-comprehensive-review/">review of Gelato</a>, I turned to <strong>Perplexity’s Deep Research</strong>. What impressed me is how <strong>thorough</strong> it is: it finds real sources, provides citations, and lets me verify the data.</p>
<p>While it’s not an “out-of-the-box” blog post generator like SEObot or Koala, <strong>Perplexity is invaluable when accuracy and depth matter</strong>. I also used it for social media content ideation and understanding complex SEO topics.</p>
<p><strong>Pros:</strong></p>
<ul>
<li><p>Research-backed, factual content with citations</p>
</li>
<li><p>Great for expert-level articles or sensitive topics</p>
</li>
<li><p>Free option available with limitations</p>
</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li><p>Takes more time to generate content (2–4 minutes)</p>
</li>
<li><p>Requires manual crafting to turn research into a polished post</p>
</li>
</ul>
<h1 id="heading-other-tools-on-my-radar"><strong>Other Tools on My Radar</strong></h1>
<p>Besides the tools I’ve used extensively, I’ve researched and considered <strong>ContentShake AI</strong>, <strong>Scalenut</strong>, and <strong>Outrank</strong>:</p>
<ul>
<li><p><strong>ContentShake AI (by Semrush)</strong>: Looks powerful for SEO agencies and businesses already using Semrush. It’s a bit on the pricey side ($60/month) but could be worth it for integrated keyword analysis and optimization.</p>
</li>
<li><p><strong>Scalenut</strong>: Strong on SERP analysis and SEO scoring. Probably great for agencies, but a steeper learning curve. I haven’t adopted it yet because of time constraints.</p>
</li>
<li><p><strong>Outrank (</strong><a target="_blank" href="https://outrank.so/?via=samir">outrank.so</a><strong>*)</strong>: Promising as an all-in-one platform that combines content and image generation. However, it is also on the pricey side at <strong>$99/month</strong>.</p>
</li>
</ul>
<h1 id="heading-what-worked-best-for-me-and-why"><strong>What Worked Best for Me and Why</strong></h1>
<ul>
<li><p>For <strong>hands-off content creation</strong>, <strong>SEObot</strong> delivered quick wins for Print2Social. Perfect when I didn’t have time to manage writers or edit content myself.</p>
</li>
<li><p>For <strong>long-form, high-quality posts</strong>, <strong>Koala Writer</strong> won. Articles over <strong>2,000 words</strong> from Koala perform <strong>very well</strong>, and it’s great when you want a human-like tone.</p>
</li>
<li><p>For <strong>research-backed content</strong>, <strong>Perplexity Deep Research</strong> is my go-to tool when I need <strong>accurate, factual information</strong> to build authority.</p>
</li>
</ul>
<h1 id="heading-lessons-learned-and-final-thoughts"><strong>Lessons Learned and Final Thoughts</strong></h1>
<ul>
<li><p><strong>AI is not a silver bullet</strong>. You still need to <strong>review and edit</strong> the output to ensure quality and relevance, especially links and citations.</p>
</li>
<li><p><strong>Mixing tools works best</strong>. I use different tools for different projects and article types, for example, Perplexity for research-heavy pieces, Koala for long-form content, and SEObot when I want a fully autonomous solution.</p>
</li>
</ul>
<h1 id="heading-my-top-performing-ai-generated-articles"><strong>My Top Performing AI-Generated Articles</strong></h1>
<ul>
<li><p><a target="_blank" href="https://www.printondemandbusiness.com/blog/best-print-on-demand-sites-top-platforms-for-custom-product-creators-in-2024/">Best Print on Demand Sites</a> — Koala Writer</p>
</li>
<li><p><a target="_blank" href="https://www.printondemandbusiness.com/blog/top-print-on-demand-trends-2025/">Print on Demand Trends 2025</a> — Koala Writer</p>
</li>
<li><p><a target="_blank" href="https://affiliatecompanies.net/blog/start-an-affiliate-program-key-steps-for-launching-your-own/">Start an Affiliate Program</a> — Koala Writer</p>
</li>
<li><p><a target="_blank" href="https://www.printondemandbusiness.com/blog/gelato-print-on-demand-a-comprehensive-review/">Gelato POD Review</a> — Perplexity Deep Research</p>
</li>
<li><p><a target="_blank" href="https://print2social.com/blog/print-on-demand-automation/">Print on Demand Automation</a> — SEObot</p>
</li>
<li><p><a target="_blank" href="https://print2social.com/blog/5-ai-content-curation-tools-for-social-media/">5 AI Content Curation Tools</a> — SEObot</p>
</li>
</ul>
<h1 id="heading-final-recommendation"><strong>Final Recommendation</strong></h1>
<p>If you’re considering AI for SEO content, my advice is to <strong>try different tools and see what works for your workflow</strong>. Start small, maybe with SEObot for quick content, Koala for longer posts, and Perplexity for research. And don’t forget to <strong>edit, personalize, and optimize</strong> the output. AI is powerful, but <strong>the human touch still makes the difference</strong>. I always start with the AI version and then <strong>optimize high-performing content manually</strong>.</p>
<p>If you’re curious, I’m planning to explore other tools like <strong>Scalenut and Outrank</strong> soon, so maybe part two of this article will be coming!</p>
<p><em>What’s your experience with AI writing tools? If you’ve used any of these (or others), drop me a comment or message. Would love to exchange notes!</em></p>
<p><em>*</em> affiliate link — if you make a purchase, I may receive a commission at no extra cost to you.</p>
]]></content:encoded></item><item><title><![CDATA[How to Track Outbound Affiliate Link Clicks in Google Analytics 4]]></title><description><![CDATA[If you're using affiliate marketing to grow your online business, one of the key performance indicators you need to track is how many people are clicking on your affiliate links. Understanding which links are getting the most engagement allows you to...]]></description><link>https://blog.samiralibabic.com/how-to-track-outbound-affiliate-link-clicks-in-google-analytics-4</link><guid isPermaLink="true">https://blog.samiralibabic.com/how-to-track-outbound-affiliate-link-clicks-in-google-analytics-4</guid><category><![CDATA[Google Analytics]]></category><category><![CDATA[web analytics]]></category><category><![CDATA[Affiliate marketing]]></category><category><![CDATA[links]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Fri, 08 Nov 2024 15:12:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/qwtCeJ5cLYs/upload/22b71fdc1633f0d4b1ec7a04a00971c4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you're using affiliate marketing to grow your online business, one of the key performance indicators you need to track is how many people are clicking on your affiliate links. Understanding which links are getting the most engagement allows you to optimize your content and focus on what works best. Fortunately, Google Analytics 4 (GA4) provides robust tools for tracking outbound link clicks, including affiliate links, so you don’t need third-party tools to see what's driving traffic.</p>
<p>In this guide, I'll walk you through how to track outbound affiliate link clicks in GA4 using <strong>Enhanced Measurement</strong> and custom <strong>Explorations</strong>. Plus, I’ll share a simple alternative using <a target="_blank" href="https://www.linktracker.info/">LinkTracker</a>, a centralized platform for shortening and tracking links.</p>
<h3 id="heading-step-1-enable-enhanced-measurement-for-automatic-outbound-link-tracking"><strong>Step 1: Enable Enhanced Measurement for Automatic Outbound Link Tracking</strong></h3>
<p>Google Analytics 4 has an automatic feature called <strong>Enhanced Measurement</strong>, which makes tracking outbound links (like affiliate links) incredibly simple. With this setting enabled, you don’t need to manually tag links or configure Google Tag Manager.</p>
<p>Here’s how to ensure <strong>Enhanced Measurement</strong> is enabled:</p>
<ol>
<li><p><strong>Go to Admin</strong> in your GA4 account.</p>
</li>
<li><p>Under <strong>Property</strong>, select <strong>Data Streams</strong>, then click on your <strong>Web</strong> data stream.</p>
</li>
<li><p>In the <strong>Enhanced Measurement</strong> section, ensure that <strong>Outbound clicks</strong> is toggled on.</p>
</li>
</ol>
<p>This enables GA4 to automatically track clicks on links that lead users away from your site, including any affiliate links you share.</p>
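<p>Under the hood, the classification is simple: a click counts as outbound when the link’s hostname does not belong to the domains configured for your data stream. If you ever want to audit which of your affiliate links should show up in the report, the logic can be approximated like this (a rough sketch with hypothetical domains, not Google’s actual implementation):</p>

```python
from urllib.parse import urlparse

# Rough approximation of GA4-style outbound-click classification.
# `site_domains` stands in for the domains configured on your data stream.
def is_outbound(link_url: str, site_domains: set) -> bool:
    host = urlparse(link_url).hostname or ""
    # Outbound = hostname is neither a configured domain nor a subdomain of one.
    return not any(host == d or host.endswith("." + d) for d in site_domains)

print(is_outbound("https://partner.example.net/ref?id=42", {"myshop.example"}))  # True
print(is_outbound("https://blog.myshop.example/post", {"myshop.example"}))       # False
```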
<h3 id="heading-step-2-create-a-custom-exploration-to-view-outbound-link-clicks"><strong>Step 2: Create a Custom Exploration to View Outbound Link Clicks</strong></h3>
<p>While outbound clicks are tracked automatically, the default reports might not show you the exact URLs that users are clicking. To track the specific affiliate links that are being clicked, we’ll create a custom <strong>Exploration</strong>.</p>
<h4 id="heading-heres-how-to-set-up-your-exploration"><strong>Here’s how to set up your exploration:</strong></h4>
<ol>
<li><p><strong>Open Explore in GA4</strong>: On the left-hand menu, click Explore to start creating a custom report.</p>
</li>
<li><p><strong>Set Up Your Exploration:</strong> Click <strong>Blank</strong> to create a new exploration and give it a descriptive name, like “Outbound Link Clicks.” In the <strong>Variables</strong> section on the left, click <strong>+</strong> to add dimensions and metrics. Add <strong>Link URL</strong> (this will show which links were clicked) and <strong>Event Name</strong> as dimensions. Add <strong>Event Count</strong> as the metric (this will show how many times each link was clicked).</p>
</li>
<li><p><strong>Configure Rows and Values:</strong> In the <strong>Rows</strong> section, drag <strong>Link URL</strong> to populate the list of URLs clicked. In the <strong>Values</strong> section, drag <strong>Event Count</strong> to display the number of clicks for each link.</p>
</li>
<li><p><strong>Apply Filters:</strong> Under <strong>Filters</strong>, set a filter so that <strong>Event Name</strong> exactly matches "<strong>click</strong>". This ensures you’re only tracking outbound link clicks and not other types of engagement.</p>
</li>
<li><p><strong>Run the Exploration:</strong> Once configured, refresh the exploration to view the data (if it does not update automatically). You’ll now see a list of all the external links (affiliate links) clicked and how many times each was clicked.</p>
</li>
</ol>
<p>This report will give you the specific data on which outbound affiliate links are receiving the most attention from your users.</p>
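<p>If you later pull the raw events out of GA4 (for example via the BigQuery export or the Data API), the same table is a simple aggregation. A sketch over hypothetical exported rows:</p>

```python
from collections import Counter

# Hypothetical exported GA4 events as (event_name, link_url) pairs.
events = [
    ("click", "https://partner-a.example/ref"),
    ("click", "https://partner-b.example/ref"),
    ("click", "https://partner-a.example/ref"),
    ("page_view", ""),  # non-click events get filtered out, as in the report
]

# Count clicks per outbound URL, mirroring the Rows/Values setup of the report.
clicks_per_link = Counter(url for name, url in events if name == "click")
for url, count in clicks_per_link.most_common():
    print(url, count)
```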
<h3 id="heading-step-3-access-and-monitor-your-report"><strong>Step 3: Access and Monitor Your Report</strong></h3>
<p>Once you’ve set up this custom exploration, you can easily access it from the <strong>Explore</strong> tab in GA4 anytime you need to review performance. This gives you an ongoing way to track outbound clicks without needing to configure new reports every time.</p>
<p>To test the setup or verify real-time clicks, you can also use <strong>DebugView</strong> in GA4:</p>
<ul>
<li><p>Navigate to <strong>Admin &gt; DebugView</strong>.</p>
</li>
<li><p>Click on outbound links on your site, and watch the click events come through in real time with details like the <strong>link_url</strong> parameter.</p>
</li>
</ul>
<h3 id="heading-bonus-use-event-based-goals-for-affiliate-clicks"><strong>Bonus: Use Event-Based Goals for Affiliate Clicks</strong></h3>
<p>If clicks on certain affiliate links are critical to your business goals, you can set up event-based conversions in GA4. For example:</p>
<ol>
<li><p><strong>Create a custom event</strong> for clicks on your most important affiliate links using the <strong>link_url</strong> parameter.</p>
</li>
<li><p>Mark this event as a <strong>conversion</strong> in GA4, so you can track it as part of your conversion goals.</p>
</li>
</ol>
<p>This will help you track how many clicks on specific affiliate links contribute to your overall marketing objectives.</p>
<h3 id="heading-an-easier-alternative-track-affiliate-links-with-linktracker"><strong>An Easier Alternative: Track Affiliate Links with LinkTracker</strong></h3>
<p>If you’d rather not configure Google Analytics, or if you’re using a different web analytics platform, <a target="_blank" href="https://www.linktracker.info/">LinkTracker</a> offers a hassle-free solution. With <a target="_blank" href="https://www.linktracker.info/">LinkTracker</a> (currently in beta), you can <strong>shorten</strong> your affiliate links and <strong>track their performance</strong> across multiple platforms in one centralized dashboard. This tool allows you to monitor clicks, performance data, and easily manage all your links without needing to log into multiple systems or configure advanced analytics. It’s a simple and effective way to keep track of your affiliate marketing efforts, especially for those who prefer an all-in-one tool.</p>
<h3 id="heading-final-thoughts"><strong>Final Thoughts</strong></h3>
<p>Tracking outbound affiliate link clicks is crucial for optimizing your affiliate marketing strategy. Whether you choose to use <strong>Google Analytics 4</strong> or a simpler tool like <strong>LinkTracker</strong>, gaining visibility into how users engage with your affiliate links will help you focus on what works.</p>
<p>With the powerful tools offered by GA4, including <strong>Enhanced Measurement</strong> and <strong>Explorations</strong>, you can track and analyze affiliate link performance without needing any extra plugins or third-party software. For those who prefer a more streamlined approach, <a target="_blank" href="https://www.linktracker.info/">LinkTracker</a> provides an easy-to-use solution for tracking affiliate links across multiple platforms.</p>
]]></content:encoded></item><item><title><![CDATA[Free and Open-Source alternatives to Bitly and Co.]]></title><description><![CDATA[In my Introduction to Link Shortening I gave an overview of the most popular solutions for link shortening and management. This time I want to introduce you to some of the alternatives to popular services like Bitly, which are free, Open-Source and c...]]></description><link>https://blog.samiralibabic.com/free-and-open-source-alternatives-to-bitly-and-co</link><guid isPermaLink="true">https://blog.samiralibabic.com/free-and-open-source-alternatives-to-bitly-and-co</guid><category><![CDATA[Url Shortener]]></category><category><![CDATA[linkshortener]]></category><category><![CDATA[Link Shortener]]></category><category><![CDATA[URL shortening]]></category><category><![CDATA[analytics]]></category><category><![CDATA[Digital Marketing ]]></category><dc:creator><![CDATA[Samir Alibabic]]></dc:creator><pubDate>Sun, 31 Mar 2024 10:19:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/2m71l9fA6mg/upload/6d4d3dc927d30f2f476c48895d10ee25.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my <a target="_blank" href="https://hashnode.com/post/clmdxqalu00010al5gigeho6b">Introduction to Link Shortening</a> I gave an overview of the most popular solutions for link shortening and management. This time I want to introduce you to some of the alternatives to popular services like Bitly, which are free, Open-Source and can be hosted on your own servers.</p>
<h1 id="heading-polr">Polr</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711225630682/d6bdd372-e049-4304-bf7c-86f5303a4162.png" alt class="image--center mx-auto" /></p>
<p><a target="_blank" href="https://polrproject.org">Polr</a> is a PHP-based, open-source link shortener with API access and basic analytics. While its user interface may have charmed users back in 2013 at the project's inception, it now falls short of contemporary design standards.</p>
<p>On the analytics front, this link shortener furnishes insights into visits, referrers, and the geographical distribution of visitors. It has an online demo which you can test before deciding to install it on your own server.</p>
<p>I decided to give it a whirl myself, but after poking around in the demo, I didn't even bother with the hassle of installing it on my own machine.</p>
<p>It's also worth noting that Polr isn't packaged as a container image, so setup is manual. Nonetheless, for those willing to undertake the effort and requiring only basic analytics, the project is readily available on <a target="_blank" href="https://github.com/cydrobolt/polr">GitHub</a>, where it garners a commendable 4.9k stars.</p>
<h1 id="heading-yourls">YOURLS</h1>
<p>Originating in 2009, this link shortener stands as one of the oldest in the field. Developed in PHP, it boasts a robust ecosystem comprising over 200 plugins designed to expand its functionalities.</p>
<p>Despite my attempts to deploy it within various containerized environments - via the <a target="_blank" href="https://github.com/YOURLS/images">Docker image</a> on a <a target="_blank" href="https://lima-vm.io">Lima</a> machine, both with and without <code>docker-compose</code>, and later within a Minikube environment using <a target="_blank" href="https://github.com/YOURLS/charts">Helm</a> - I ran into persistent setbacks. Whether due to my own missteps or an outdated Docker image, I was unable to achieve a successful deployment.</p>
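<p>For reference, this is roughly the minimal <code>docker-compose.yml</code> I was aiming for, based on the environment variables documented for the official YOURLS image. Treat it as a sketch rather than a verified setup; all credentials are placeholders.</p>
<pre><code class="lang-yaml"># Sketch of a YOURLS + MySQL setup; credentials are placeholders
services:
  yourls:
    image: yourls
    ports:
      - "8080:80"
    environment:
      YOURLS_DB_HOST: db
      YOURLS_DB_USER: yourls
      YOURLS_DB_PASS: changeme
      YOURLS_SITE: http://localhost:8080
      YOURLS_USER: admin
      YOURLS_PASS: changeme
    depends_on:
      - db
  db:
    image: mysql:8
    environment:
      MYSQL_DATABASE: yourls
      MYSQL_USER: yourls
      MYSQL_PASSWORD: changeme
      MYSQL_RANDOM_ROOT_PASSWORD: "1"
</code></pre>
<p>In theory, <code>docker compose up</code> should then expose the admin interface at <code>http://localhost:8080/admin/</code>.</p>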
<p>Compounding the challenge, as a macOS Sonoma user I faced the additional hurdle that Apple no longer bundles PHP with the OS. To work around this, I resorted to a manual installation. Running the familiar <code>brew install php</code> command gave me the instructions needed to integrate PHP with Apache:</p>
<pre><code class="lang-plaintext">To enable PHP in Apache add the following to httpd.conf and restart Apache:
    LoadModule php_module /usr/local/opt/php/lib/httpd/modules/libphp.so

    &lt;FilesMatch \.php$&gt;
        SetHandler application/x-httpd-php
    &lt;/FilesMatch&gt;

Finally, check DirectoryIndex includes index.php
    DirectoryIndex index.php index.html
</code></pre>
<p>However, upon executing the instructions, macOS Gatekeeper intervened, flagging the module as unsigned. While I won't delve into the intricacies here, you can refer to a comprehensive guide provided in a <a target="_blank" href="https://medium.com/@nadine.fisch/add-php-to-apache-on-macos-12-e3bb43469195">Medium article</a> for detailed instructions. Feeling overwhelmed by the complexity of the process, I made the decision to pivot and explore alternative solutions.</p>
<p>Nevertheless, the project continues to receive active maintenance from its creator, <a target="_blank" href="https://twitter.com/ozh">Ozh Richard</a>, and boasts a thriving community of contributors. Its source code is readily available on <a target="_blank" href="https://github.com/YOURLS/YOURLS">GitHub</a>, where it has garnered an impressive 10k stars, indicative of its widespread adoption and popularity within the developer community.</p>
<h1 id="heading-kutt">Kutt</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699187695881/1f5bd1f9-270f-4b6b-afff-8861b1841d7c.png" alt class="image--center mx-auto" /></p>
<p><a target="_blank" href="http://kutt.it">Kutt</a> represents a sleek, contemporary option, delivering a robust user experience alongside an array of appealing features. Among its highlights are customizable short URLs, comprehensive click tracking, and detailed analytics.</p>
<p>Under the hood, Kutt's tech-stack includes Node.js with Express, Passport for authentication, React with Next.js for frontend rendering, Easy Peasy for state management, styled-components for CSS styling, and Recharts for chart visualization. Data storage is handled by PostgreSQL, complemented by Redis for caching, while deployment is streamlined through Docker.</p>
<p>It boasts browser extensions, a handy CLI, and versatile clients and SDKs for almost any platform you can think of. Plus, it plays nice with ShareX, the ultimate screen capture and file-sharing tool.</p>
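<p>To give you a taste of the API, creating a short link on a self-hosted instance looks roughly like this. The endpoint and field name follow Kutt's documented v2 API; the host, port and key are placeholders for your own instance:</p>
<pre><code class="lang-shell"># Sketch: create a short link via Kutt's v2 REST API
curl -X POST "http://localhost:3000/api/v2/links" \
    -H "X-API-KEY: your_api_key" \
    -H "Content-Type: application/json" \
    -d '{"target": "https://example.com/some/long/url"}'
</code></pre>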
<p>However, as of writing this, the hosted version at <a target="_blank" href="http://kutt.it">kutt.it</a> seems to have taken a hiatus. Moreover, its development pace has slowed down, and its tech-stack isn't exactly keeping up with the times. Attempting a manual installation via npm was a headache, thanks to pesky dependency conflicts, leaving Docker as the only viable option.</p>
<p>Even after firing it up in Docker, creating an account proved to be an exercise in futility, with the UI offering nothing more than a vague <code>An error occurred</code> message. Call me lazy, but I wasn't keen on diving into container logs and hunting down root causes. Nevertheless, I'm sure with a bit more effort, one could get it up and running smoothly.</p>
<p>It's open-source under the MIT license, living its best life on <a target="_blank" href="https://github.com/thedevs-network/kutt">GitHub</a> with a respectable 8k stars. And hey, word on the street is that the maintainer's already cooking up a version 3.0!</p>
<h1 id="heading-shlink">Shlink</h1>
<p>Shlink is a self-hosted URL shortener offering a <a target="_blank" href="https://shlink.io/documentation/api-docs">REST API</a> and a <a target="_blank" href="https://shlink.io/documentation/command-line-interface/entry-point">CLI interface</a> for interaction. It also includes a Progressive Web Application (PWA) for interacting with multiple Shlink instances.</p>
<p>Similar to Kutt, <a target="_blank" href="https://shlink.io">Shlink</a> can be run via Docker with an internal database. The only requirement is an API key for <a target="_blank" href="https://dev.maxmind.com/geoip/geolite2-free-geolocation-data">GeoLite2</a>, used for geolocation data.</p>
<p>To test Shlink locally with ease, Docker users can simply execute the following command:</p>
<pre><code class="lang-plaintext">docker run \
    --name my_shlink \
    -p 8080:8080 \
    -e DEFAULT_DOMAIN=localhost \
    -e IS_HTTPS_ENABLED=false \
    -e GEOLITE_LICENSE_KEY=your_license_key \
    shlinkio/shlink:stable
</code></pre>
<p>This command initiates the Shlink server, encompassing both a REST API and a CLI. Subsequently, you can utilise the CLI to generate your API key:</p>
<pre><code class="lang-plaintext">docker exec -it my_shlink shlink api-key:generate
</code></pre>
<p>When experimenting with the CLI, if you create a link using the command <code>docker exec -it my_shlink shlink short-url:create</code>, the resulting short link may not include the default port <code>8080</code>. Therefore, you have to add it manually for the link to work (e.g. <code>http://localhost:8080</code>).</p>
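<p>The same can be done over the REST API. Assuming the container from above is running on port <code>8080</code> and you substitute your generated key, a request looks roughly like this (endpoint per Shlink's v3 API documentation):</p>
<pre><code class="lang-shell"># Sketch: create a short URL via Shlink's REST API
curl -X POST "http://localhost:8080/rest/v3/short-urls" \
    -H "X-Api-Key: your_generated_key" \
    -H "Content-Type: application/json" \
    -d '{"longUrl": "https://example.com/some/long/url"}'
</code></pre>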
<p>Additionally, the PWA, commonly used for accessing Shlink, can also be installed and operated via Docker by executing:</p>
<pre><code class="lang-shell">docker run \
    --name shlink-web-client \
    -p 8000:8080 \
    -e SHLINK_SERVER_URL=http://localhost:8080 \
    -e SHLINK_SERVER_API_KEY=your_generated_key \
    shlinkio/shlink-web-client
</code></pre>
<p>Upon visiting <code>http://localhost:8000</code> you will be greeted with a user-friendly Web App interface, complete with analytics. What I find cool about this link shortener is the ability to manage multiple servers within a single interface. 😎</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711707546647/e847f860-e246-43cb-8c81-e3015eea2d59.png" alt class="image--center mx-auto" /></p>
<p>Besides the number of visits for each link, the provided analytics include operating systems, browsers, referrers, countries and cities. Data series can be shown for any period, from one day to a year or custom ranges.</p>
<p>Other interesting features include QR code generation, redirection rules based on devices, languages or query parameters, editing short URLs after creation, and comparing statistics for up to five short links. And all this is just at first glance at the Web App; there is <a target="_blank" href="https://shlink.io/documentation/some-features/">more</a>. Very powerful!</p>
<p>All the kudos to the author and maintainer <a target="_blank" href="https://github.com/acelaya">Alejandro Celaya</a>, who has been building the project actively since 2016! 🤯</p>
<p>It's definitely underrated with 2.7k stars on GitHub, considering everything this link shortener offers. For those interested, you can check out the repository here: <a target="_blank" href="https://github.com/shlinkio/shlink">https://github.com/shlinkio/shlink</a> ⭐️</p>
<h1 id="heading-dubco">Dub.co</h1>
<p>For those in search of a contemporary solution for link shortening and analytics, <a target="_blank" href="https://dub.co">Dub.co</a> is worth considering. When you visit this hosted service, you'll immediately notice its modern and user-friendly design, which is typical of today's startup landscape.</p>
<p>Dub.co boasts a modern tech-stack, including Next.js, TailwindCSS, Prisma, NextAuth.js, and BoxyHQ for authentication, alongside Turborepo as a robust build system. It follows a cloud-first approach, utilizing various managed services:</p>
<ul>
<li><p>ClickHouse database, managed by <a target="_blank" href="https://www.tinybird.co">Tinybird</a> - used for time-series click data</p>
</li>
<li><p>Redis database, managed by <a target="_blank" href="https://upstash.com">Upstash</a> - for caching link metadata and serving redirects</p>
</li>
<li><p>MySQL database, managed by <a target="_blank" href="http://planetscale.com">PlanetScale</a> - for storing the actual user and link data</p>
</li>
</ul>
<p>Whether you choose to deploy it locally or integrate it into your existing infrastructure, you'll need to create accounts with the mentioned service providers. Furthermore, you can use <a target="_blank" href="https://postmarkapp.com">Postmark</a> for sending emails and <a target="_blank" href="https://unsplash.com">Unsplash</a> to customise your OpenGraph images.</p>
<p>There is no Docker image, but installation is smooth nonetheless, thanks to nice-looking documentation powered by <a target="_blank" href="https://mintlify.com">Mintlify</a>. The only issues I ran into while setting it up locally were a missing <code>POSTMARK_API_KEY</code> (optional according to the documentation) and the <code>@dub/ui</code> and <code>@dub/utils</code> modules not building properly. But it was nothing an average technical user can't solve.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711731880300/1dad0801-c588-4b7a-bcfe-abe379d6af4a.png" alt class="image--center mx-auto" /></p>
<p>Analytics provide visits, referrers, locations by country or city, devices, browsers, and operating systems. Essentially, it covers everything you might need to effectively track your campaign's performance. What I found the most impressive with Dub.co is its extensive options for link creation: UTM parameters, custom social media cards, link cloaking, password protection, expiration date, iOS, Android and Geo-based targeting. It's all there! 🤯</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711732277092/2dae4042-34f7-446f-a185-9ae2d9c6cf35.png" alt class="image--center mx-auto" /></p>
<p><a target="_blank" href="https://twitter.com/steventey">Steven Tey</a> has done an outstanding job with <a target="_blank" href="http://Dub.co">Dub.co</a>, and its success is evident from its recent launch on <a target="_blank" href="https://www.producthunt.com/posts/dub">Product Hunt</a>. Dub.co quickly rose to become #1 Product of the Day and #1 Product of the Week. With its continued momentum, it's well on its way to becoming #1 Product of the Month. This recognition is sure to propel <a target="_blank" href="http://Dub.co">Dub.co</a> to even greater success in the future.</p>
<p>Dub.co is licensed under the AGPL-3.0 license and has garnered an impressive 15.7k stars on <a target="_blank" href="https://github.com/dubinc/dub">GitHub</a>. 👏</p>
<h1 id="heading-linktrackerinfo">linktracker.info</h1>
<p>This article would not be complete without me mentioning my own take on link shortening and analytics: <a target="_blank" href="https://www.linktracker.info">LinkTracker</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711878086926/a5100950-58f9-4fc5-a87b-b25183c17f36.png" alt class="image--center mx-auto" /></p>
<p>It features a simple interface to create and organise short links. Links can be categorised and tagged. Analytics are similar to what we have seen with other link shorteners: visits, browsers, devices, referrers and countries.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711878275645/901c39ce-6a85-48e2-ba4a-bf9ed280e510.png" alt class="image--center mx-auto" /></p>
<p>All shortened links are managed in a single table view, with statistics shown below. Currently, statistics are aggregated for all the links, except for visits, which are the most important statistic and therefore also shown for individual links.</p>
<p>The tech stack is Next.js, TailwindCSS and AntDesign. Data is stored in a PostgreSQL database managed by Supabase, with Prisma serving as the ORM.</p>
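<p>To illustrate the data-model side, a short link with per-visit analytics could be declared in Prisma along these lines. This is a hypothetical sketch for illustration, not LinkTracker's actual schema:</p>
<pre><code class="lang-plaintext">// Hypothetical Prisma models for a short link and its visits
// (not LinkTracker's actual schema)
model Link {
  id        Int      @id @default(autoincrement())
  slug      String   @unique   // the short code, e.g. "CRZFOg"
  target    String             // the destination URL
  category  String?
  tags      String[]           // String[] requires PostgreSQL
  visits    Visit[]
  createdAt DateTime @default(now())
}

model Visit {
  id        Int      @id @default(autoincrement())
  link      Link     @relation(fields: [linkId], references: [id])
  linkId    Int
  country   String?
  browser   String?
  device    String?
  referrer  String?
  visitedAt DateTime @default(now())
}
</code></pre>
<p>Aggregating the dashboard statistics then becomes a matter of grouping the <code>Visit</code> rows by country, browser, device or referrer.</p>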
<p>It's completely free to use and you can try it at <a target="_blank" href="http://linktracker.info/CRZFOg">https://linktracker.info</a>. 🙏</p>
<h1 id="heading-unshorten-just-in-case">Unshorten - just in case</h1>
<p>As with any link shortening service, it's essential to consider the potential risks associated with spam and security vulnerabilities. Open-source solutions often offer greater transparency and security assurances.</p>
<p>If you ever encounter a short link and have doubts about its legitimacy, you can utilise the free "unshorten" tool available at <a target="_blank" href="http://unshorten.me">unshorten.me</a>. This tool helps you expand shortened links, providing insight into their destination and assisting in verifying their authenticity.</p>
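<p>At the time of writing, unshorten.me also exposes a simple JSON endpoint, so you can resolve a suspicious link from the command line too. Treat the URL shape as a sketch and check the site's API page before relying on it:</p>
<pre><code class="lang-shell"># Sketch: resolve a short link via unshorten.me's JSON API
curl "https://unshorten.me/json/https://bit.ly/example"
</code></pre>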
<h1 id="heading-awesome-list">Awesome list</h1>
<p>If you find this topic intriguing and wish to delve deeper, I highly recommend exploring the "awesome" <a target="_blank" href="https://github.com/738/awesome-url-shortener#self-hosting-opensource">list on GitHub</a> dedicated to link shortening and management tools.</p>
<p>Thank you for taking the time to read this far. I hope you found the information useful. Feel free to reach out if you have any further questions or feedback. ✌️</p>
]]></content:encoded></item></channel></rss>