function buildPersonalWebsite() {
  const developer = new Developer('Enes Arıkan');
  const skills = ['React', 'TypeScript', 'Next.js', 'TailwindCSS'];
  
  developer.setExperience([
    { role: 'QA Engineer', company: 'Insider', years: 2 },
    { role: 'Software Tester', company: 'Various', years: 3 }
  ]);
  
  const projects = developer.createProjects([
    'E-commerce Platform',
    'Task Management App', 
    'Weather Dashboard',
    'Portfolio Website'
  ]);
  
  return {
    portfolio: developer.showcase(projects),
    blog: developer.shareKnowledge(),
    contact: developer.getContactInfo(),
    playground: developer.createInteractiveGames()
  };
}

class CareerJourney {
  constructor() {
    this.levels = [];
    this.currentLevel = 0;
    this.experience = 0;
  }
  
  addExperience(role, company, duration) {
    this.levels.push({
      title: role,
      company: company,
      duration: duration,
      skills: this.getSkillsForRole(role)
    });
    this.levelUp();
  }
  
  levelUp() {
    this.currentLevel++;
    this.experience += 100;
    console.log(`Level up! Now at level ${this.currentLevel}`);
  }
  
  getSkillsForRole(role) {
    const skillMap = {
      'QA Engineer': ['Testing', 'Automation', 'Bug Tracking'],
      'Frontend Developer': ['React', 'JavaScript', 'CSS'],
      'Full Stack Developer': ['Node.js', 'Databases', 'APIs']
    };
    return skillMap[role] || [];
  }
}

// Initialize the journey
const career = new CareerJourney();
career.addExperience('QA Engineer', 'Insider', '2 years');

// Bug hunting mini-game logic
function createBugHunt() {
  const bugs = [
    { type: 'NullPointerException', severity: 'high' },
    { type: 'RaceCondition', severity: 'critical' },
    { type: 'MemoryLeak', severity: 'medium' },
    { type: 'InfiniteLoop', severity: 'high' }
  ];
  
  return {
    findBugs: () => bugs.filter(bug => bug.severity === 'high'),
    fixBug: (bugId) => console.log(`Fixed bug: ${bugId}`),
    score: bugs.length * 10
  };
}

// Skill tree implementation
const skillTree = {
  frontend: {
    react: { level: 5, unlocked: true },
    typescript: { level: 4, unlocked: true },
    nextjs: { level: 4, unlocked: true }
  },
  testing: {
    automation: { level: 5, unlocked: true },
    manual: { level: 5, unlocked: true },
    performance: { level: 3, unlocked: true }
  },
  tools: {
    git: { level: 4, unlocked: true },
    docker: { level: 2, unlocked: false },
    kubernetes: { level: 1, unlocked: false }
  }
};

export default buildPersonalWebsite;
[Image: Robot representing artificial intelligence]
January 30, 2025 · AI

LLM Hallucinations: What Every Developer Needs to Know

Language models confidently make things up. Understanding why this happens — and how to build around it — is now a core engineering skill.

AI · LLM · Reliability · Developer Tools

At some point, every developer who integrates an LLM into a real product hits the same wall: the model returns something confident, plausible, and completely wrong. A function that doesn't exist. A date that never happened. A statistic that sounds real but wasn't in any training data.

This is called hallucination, and understanding it isn't just academic — it determines how you design, test, and communicate the limits of AI features.

Why Hallucinations Happen

LLMs don't retrieve facts from a database. They generate text by predicting what comes next based on patterns learned from training data. This means they don't have a way to say "I don't know this" and look it up — they pattern-match to the most plausible-sounding continuation rather than admitting uncertainty.

Confidence is a feature of the generation mechanism, not a reliable signal of correctness. The model sounds certain because it's always generating the most probable next token, regardless of whether it "knows" the answer.
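That mechanism can be sketched in a few lines. The candidate tokens and scores below are invented for illustration; a real model scores tens of thousands of vocabulary entries, but the shape of the problem is the same:

```javascript
// Toy "next-token" step. Scores are made-up logits for illustration.
function softmax(scores) {
  const max = Math.max(...scores);
  const exps = scores.map(s => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

function nextToken(candidates, scores) {
  const probs = softmax(scores);
  let best = 0;
  for (let i = 1; i < probs.length; i++) {
    if (probs[i] > probs[best]) best = i;
  }
  // There is always a "most probable" token -- even when every candidate
  // is wrong. Nothing in this loop distinguishes "knows the answer"
  // from "best guess among bad options."
  return { token: candidates[best], confidence: probs[best] };
}
```

Note that `nextToken` always returns a winner with nonzero probability. The fluent, confident tone of hallucinated output falls straight out of this.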

Types of Hallucinations in Practice

Factual fabrication: The model invents facts — citations, statistics, dates, names — that seem plausible but are wrong. This is the most dangerous type for applications that present AI output as authoritative.

Code hallucinations: The model writes code that references functions, methods, or libraries that don't exist. The syntax is correct; the API is invented. This is extremely common with less-documented frameworks or niche libraries.
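One cheap defense is to verify at runtime that a symbol the model told you to call actually exists before executing anything that depends on it. A minimal sketch, assuming plain JavaScript objects and dotted paths:

```javascript
// Check that a dotted path like "fs.promises.readFile" resolves to a
// real function on a root object before trusting generated code.
function callExists(root, path) {
  let obj = root;
  for (const part of path.split('.')) {
    if (obj == null || obj[part] === undefined) return false;
    obj = obj[part];
  }
  return typeof obj === 'function';
}
```

For example, `callExists(Math, 'max')` is true while `callExists(Math, 'maximumOf')` is false; the invented API fails the check before it fails in production.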

Instruction drift: In long conversations or complex prompts, the model loses track of constraints it was given. It "forgets" it was told not to use a certain format, or starts answering as if a previous instruction no longer applies.

Confident extrapolation: When the model knows 90% of the information needed, it fills in the remaining 10% with plausible-seeming content rather than flagging the gap. This is particularly insidious because the output is mostly right.

Designing Around Hallucinations

The practical question isn't "how do we eliminate hallucinations" — current models can't fully eliminate them. The question is: how do we build systems where hallucinations cause the least harm?

Retrieval-Augmented Generation (RAG): Instead of asking the model to generate from memory, you retrieve relevant documents first and ask the model to answer based only on those documents. Hallucination rates drop significantly for factual queries.
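A minimal sketch of the retrieve-then-prompt step, assuming your documents are plain strings. Retrieval here is naive keyword overlap purely for illustration; production systems typically use embedding search:

```javascript
// Naive retrieval: rank documents by how many query terms they contain.
function retrieve(docs, query, k = 2) {
  const terms = query.toLowerCase().split(/\s+/);
  return docs
    .map(doc => ({
      doc,
      score: terms.filter(t => doc.toLowerCase().includes(t)).length
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(({ doc }) => doc);
}

// Ask the model to answer from the retrieved context only.
function buildGroundedPrompt(docs, query) {
  const context = retrieve(docs, query).join('\n');
  return `Answer using ONLY the context below. If the answer is not in ` +
         `the context, say "not found".\n\nContext:\n${context}\n\n` +
         `Question: ${query}`;
}
```

The key move is the last line of instruction: giving the model an explicit "not found" escape hatch instead of forcing it to produce an answer.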

Constrain the output space: The more open-ended the prompt, the more room for hallucination. "Summarize only what is explicitly stated in the review. Do not infer or extrapolate" yields more honest outputs than prompts that implicitly reward confident answers.

Build in uncertainty signals: Design your prompts to encourage the model to express uncertainty. "If you're not certain, say so and explain what you're unsure about" produces more honest outputs.
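Both ideas can be baked into a reusable prompt builder so that every call gets the constraint and the uncertainty escape hatch. The wording below is illustrative, not a benchmarked prompt:

```javascript
// Prompt shaping: narrow the output space and invite uncertainty
// instead of rewarding confident guesses.
function summarizeReviewPrompt(review) {
  return [
    'Summarize only what is explicitly stated in the review below.',
    'Do not infer or extrapolate.',
    "If you're not certain about a point, say so and explain why.",
    '',
    `Review: ${review}`
  ].join('\n');
}
```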

Validate structured outputs: If your feature depends on structured output (JSON, lists, specific formats), validate programmatically before using it. Don't assume that because the format looks right, the content is right.
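A minimal shape check before trusting model JSON might look like the sketch below. Schema libraries do this far more rigorously, but even a hand-rolled guard catches the two common failures: output that isn't JSON at all, and JSON missing the keys you depend on.

```javascript
// Parse model output defensively: reject non-JSON and missing keys
// instead of letting bad data flow into the rest of the system.
function parseModelJson(raw, requiredKeys) {
  let data;
  try {
    data = JSON.parse(raw);
  } catch {
    return { ok: false, error: 'invalid JSON' };
  }
  const missing = requiredKeys.filter(k => !(k in data));
  if (missing.length) {
    return { ok: false, error: `missing keys: ${missing.join(', ')}` };
  }
  return { ok: true, data };
}
```

Keys can be present with hallucinated values, of course; this guard only catches structural failures, which is why content-level checks still matter.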

Testing for Hallucinations

From a QA perspective, hallucination testing means building a test set that specifically targets the model's knowledge boundaries:

  • Questions about recent events (post-training cutoff)
  • Questions about niche topics where training data is sparse
  • Questions with partially correct premises the model might accept rather than correct
  • Long conversations designed to erode instruction adherence
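A test set like this can live in code. In the sketch below, `askModel` is a placeholder for whatever model client you use, and the cases and regexes are illustrative examples of boundary probes, not a real suite:

```javascript
// Hallucination regression cases: each pairs a boundary-probing prompt
// with a checker that passes only if the model declines or corrects,
// rather than fabricating an answer.
const hallucinationCases = [
  {
    prompt: 'Who won the 2031 World Cup?', // post-cutoff event
    passes: answer => /don't know|cannot|no information/i.test(answer)
  },
  {
    prompt: 'Since Python 4 removed the GIL, how do threads work?', // false premise
    passes: answer => /premise|has not|no Python 4/i.test(answer)
  }
];

function runSuite(askModel) {
  return hallucinationCases.map(c => ({
    prompt: c.prompt,
    passed: c.passes(askModel(c.prompt))
  }));
}
```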

Run these test cases after any model update, system prompt change, or context window modification. Hallucination behavior is sensitive to all of these.

Communicating Limits to Users

Applications that present AI output without context create higher risk than those that contextualize it. "Here's what the AI found — always verify important information" actively shapes how users engage with output and reduces harm when the model is wrong.

This isn't just a legal disclaimer. It's a UX decision and a QA concern rolled into one.

The Bottom Line

Hallucinations aren't a bug that will be patched in the next release. They're a fundamental characteristic of how current LLMs work. The best developers building with AI treat them as a known constraint to design around — not a flaw to be embarrassed about or a reason to avoid the technology.

Know where your model is most likely to hallucinate. Design your system to catch or minimize those cases. Be honest with users about what AI can and can't reliably do. That's the job.


This is part of a series on working with AI in real products. Read next: Testing AI Features.