<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AGI on Arshad Siddiqui</title><link>https://arshadhs.github.io/tags/agi/</link><description>Recent content in AGI on Arshad Siddiqui</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 15 Dec 2025 10:55:52 +0100</lastBuildDate><atom:link href="https://arshadhs.github.io/tags/agi/index.xml" rel="self" type="application/rss+xml"/><item><title>AI Stages: ANI, AGI, ASI</title><link>https://arshadhs.github.io/docs/ai/foundation/ai-stages/</link><pubDate>Thu, 04 Jul 2024 10:55:52 +0100</pubDate><guid>https://arshadhs.github.io/docs/ai/foundation/ai-stages/</guid><description>&lt;h1 id="ai-development-stages-ani--agi--asi">
 AI Development Stages: ANI → AGI → ASI
 
 &lt;a class="anchor" href="#ai-development-stages-ani--agi--asi">#&lt;/a>
 
&lt;/h1>
&lt;p>Artificial Intelligence is often described in &lt;strong>three stages&lt;/strong>, based on capability and scope:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>ANI:&lt;/strong> Task-specific intelligence (today’s AI)&lt;/li>
&lt;li>&lt;strong>AGI:&lt;/strong> Human-level general intelligence (future goal)&lt;/li>
&lt;li>&lt;strong>ASI:&lt;/strong> Beyond human intelligence (theoretical)&lt;/li>
&lt;/ul>
&lt;hr>
&lt;p>&lt;img src="https://arshadhs.github.io/images/ai/ai_stages.png" alt="AI Stages" />&lt;/p>
&lt;hr>
&lt;h2 id="ani--artificial-narrow-intelligence">
 ANI — Artificial Narrow Intelligence
 
 &lt;a class="anchor" href="#ani--artificial-narrow-intelligence">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>Also called &lt;strong>Weak AI&lt;/strong>&lt;/li>
&lt;li>Designed to perform &lt;strong>one specific task&lt;/strong>&lt;/li>
&lt;li>Operates within a &lt;strong>predefined environment&lt;/strong>&lt;/li>
&lt;li>Cannot generalise beyond its training&lt;/li>
&lt;li>&lt;strong>Most AI systems today are ANI&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Examples&lt;/strong>&lt;/p></description></item><item><title>AI Agents</title><link>https://arshadhs.github.io/docs/ai/genai/ai-agents/</link><pubDate>Mon, 15 Dec 2025 10:55:52 +0100</pubDate><guid>https://arshadhs.github.io/docs/ai/genai/ai-agents/</guid><description>&lt;h1 id="ai-agents">
 AI Agents
 
 &lt;a class="anchor" href="#ai-agents">#&lt;/a>
 
&lt;/h1>
&lt;p>Also referred to as Agentic AI.&lt;/p>
&lt;p>AI agents are &lt;strong>intelligent systems&lt;/strong> that can &lt;strong>plan, make decisions, and take actions&lt;/strong> to achieve goals with &lt;strong>minimal human intervention&lt;/strong>.&lt;/p>
&lt;ul>
&lt;li>
&lt;p>A common use case is &lt;strong>task automation&lt;/strong>, for example booking travel based on a user’s request.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>AI agents typically build on &lt;strong>Generative AI&lt;/strong> and use &lt;strong>Large Language Models (LLMs)&lt;/strong> as the reasoning core.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Agents often interact with tools (APIs, databases, calendars) to complete multi-step workflows.&lt;/p></description></item><item><title>Transformer</title><link>https://arshadhs.github.io/docs/ai/deep-learning/090-transformer/</link><pubDate>Mon, 15 Dec 2025 10:55:52 +0100</pubDate><guid>https://arshadhs.github.io/docs/ai/deep-learning/090-transformer/</guid><description>&lt;h1 id="transformer">
 Transformer
 
 &lt;a class="anchor" href="#transformer">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>
&lt;p>A neural network architecture&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Based on the multi-head attention mechanism&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Text is converted into numerical representations called tokens, and each token is mapped to a vector via lookup in a word-embedding table&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Takes a text sequence as input and produces another text sequence as output&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Foundation for modern &lt;strong>&lt;a href="https://arshadhs.github.io/docs/ai/genai/llm/">Large Language Models (LLMs)&lt;/a>&lt;/strong> like ChatGPT and Gemini&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Transformer architecture: Model, Position-wise Feed-Forward Networks, Residual Connection, and Layer Normalization&lt;/p></description></item></channel></rss>