Python for AI
How Python became the language of AI — the full ecosystem, scikit-learn in practice, calling AI APIs, building real apps, and your personal roadmap into ML.
Everything you’ve learned leads here
Functions, classes, NumPy, requests, async, packaging — all of it is the exact toolkit used to build real AI applications. This lesson shows you how the pieces connect.
"Python didn't become the language of AI by accident. Its readable syntax maps directly to mathematical ideas, and its ecosystem — NumPy through PyTorch — is the most complete scientific stack ever assembled."
— Shurai

Why Python Dominates AI & Machine Learning
Python's readable syntax maps directly onto the maths: a line like W = X @ W.T looks like a textbook equation.

The AI / ML Stack — Visualised
Part 1 — Classical ML with scikit-learn
scikit-learn is your starting point for machine learning. Every algorithm follows the same fit / predict / score pattern — learn it once and it works for every model:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report
# 1. Load data
X, y = load_iris(return_X_y=True)
print(f"Dataset: {X.shape[0]} samples, {X.shape[1]} features")
# Dataset: 150 samples, 4 features
# 2. Split — keep 20% unseen for honest evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
# 3. Train
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
# 4. Evaluate
preds = model.predict(X_test)
print(f"Accuracy: {accuracy_score(y_test, preds):.1%}")
print(classification_report(y_test, preds, target_names=load_iris().target_names))
Accuracy: 96.7%
              precision    recall  f1-score   support
      setosa       1.00      1.00      1.00        10
  versicolor       1.00      0.91      0.95        11
   virginica       0.90      1.00      0.95         9
# Same fit/predict API — just change the model class
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

for clf in [RandomForestClassifier(), LogisticRegression(), DecisionTreeClassifier()]:
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{clf.__class__.__name__:28} {acc:.1%}")
# RandomForestClassifier 96.7%
# LogisticRegression 93.3%
# DecisionTreeClassifier 90.0%
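The same fit/predict pattern also composes into larger tools. As a sketch (not part of the lesson's original code): scikit-learn's Pipeline bundles preprocessing and a model into one fit/predict object, and cross_val_score repeats the train/test split several times for a more honest accuracy estimate.

```python
# A sketch: the fit/predict object can itself be a Pipeline,
# and cross-validation gives five honest splits instead of one.
from sklearn.datasets import load_iris
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Pipeline: scale features, then classify — still one fit/predict object,
# and scaling is learned from the training folds only (no data leakage)
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross-validation: train on 4/5 of the data, test on the rest, 5 times
scores = cross_val_score(pipe, X, y, cv=5)
print(f"Mean accuracy: {scores.mean():.1%} (+/- {scores.std():.1%})")
```

Because the pipeline is a single estimator, anything that accepts a model — grid search, cross-validation, serialisation — accepts the whole preprocessing chain too.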
Part 2 — Calling AI APIs
You don’t need to train your own model to build AI-powered apps. The fastest path is calling a hosted model API — GPT-4, Claude, or Gemini — via HTTP. Your skills from the requests lesson apply directly:
from openai import OpenAI
client = OpenAI() # reads OPENAI_API_KEY from environment
def ask_ai(question):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
print(ask_ai("Explain recursion in one sentence."))
# Recursion is a function calling itself with a simpler version of the
# problem until it reaches a base case that stops the calls.
import anthropic
client = anthropic.Anthropic() # reads ANTHROPIC_API_KEY from env
message = client.messages.create(
    model="claude-3-5-haiku-20241022",
    max_tokens=256,
    messages=[{"role": "user", "content": "What is a neural network?"}],
)
print(message.content[0].text)
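These SDKs are thin wrappers over plain HTTP. As a sketch of what they do under the hood, here is the same OpenAI chat call built with requests directly — the endpoint URL, the Bearer header, and the JSON payload are the whole protocol. The helper names here (build_chat_payload, ask_ai_raw) are illustrative, not part of any SDK; actually sending the request requires a real OPENAI_API_KEY.

```python
import os
import requests

def build_chat_payload(question, model="gpt-4o-mini"):
    """Build the JSON body the /v1/chat/completions endpoint expects."""
    return {"model": model, "messages": [{"role": "user", "content": question}]}

def ask_ai_raw(question):
    """Same call as the SDK makes, but as a plain HTTP POST."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json=build_chat_payload(question),
        timeout=30,
    )
    resp.raise_for_status()  # turn 4xx/5xx into exceptions
    return resp.json()["choices"][0]["message"]["content"]
```

Seeing the raw request makes it clear why your requests skills transfer: every hosted-model API is ultimately JSON in, JSON out.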
Part 3 — Build a Practical AI App
Let’s put everything together: a customer feedback classifier that uses an AI API to automatically label reviews as positive, negative, or neutral — and saves results to a CSV:
import csv
import json
import logging
import time

import anthropic
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
logger = logging.getLogger(__name__)
client = anthropic.Anthropic()
REVIEWS = [
    "Absolutely love this product! Best purchase I've made.",
    "It broke after two days. Very disappointed.",
    "Decent quality, arrived on time. Nothing special.",
    "Customer service was brilliant, resolved my issue instantly!",
    "The colour was different from the photo. A bit misleading.",
]
def classify_review(review):
    """Ask Claude to classify a review. Returns dict with label + reason."""
    prompt = f"""Classify this customer review as POSITIVE, NEGATIVE, or NEUTRAL.
Respond ONLY with valid JSON in this exact format:
{{"label": "POSITIVE", "confidence": 0.95, "reason": "one sentence"}}
Review: {review}"""
    msg = client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=128,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(msg.content[0].text)
results = []
for i, review in enumerate(REVIEWS, 1):
    logger.info(f"Processing review {i}/{len(REVIEWS)}")
    result = classify_review(review)
    results.append({"review": review[:50], **result})
    time.sleep(0.5)  # stay within rate limits
# Save to CSV (uses your file I/O skills)
with open("classified.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["review", "label", "confidence", "reason"])
    w.writeheader()
    w.writerows(results)

print("\n=== Results ===")
for r in results:
    icon = {"POSITIVE": "✅", "NEGATIVE": "❌", "NEUTRAL": "➖"}[r["label"]]
    print(f"{icon} {r['label']:8} ({r['confidence']:.0%}) {r['review']}")
=== Results ===
✅ POSITIVE (95%) Absolutely love this product! Best purchase...
❌ NEGATIVE (98%) It broke after two days. Very disappointed.
➖ NEUTRAL (80%) Decent quality, arrived on time. Nothing spe...
✅ POSITIVE (97%) Customer service was brilliant, resolved my...
❌ NEGATIVE (72%) The colour was different from the photo. A bi...
Never hardcode API keys in your source files. Store them in environment variables — export OPENAI_API_KEY="sk-..." in your terminal — or in a .env file (pip install python-dotenv, then load it with from dotenv import load_dotenv; load_dotenv()). Add .env to .gitignore so you never accidentally commit keys to GitHub.
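python-dotenv handles this for you, but there is no magic in it. A stdlib-only sketch of the same idea — each KEY=VALUE line in the file becomes an environment variable (this toy loader ignores python-dotenv's quoting and interpolation rules, and the demo key name and value are made up):

```python
import os
from pathlib import Path

def load_env_file(path=".env"):
    """Minimal .env loader: KEY=VALUE lines, '#' comments, no quoting rules."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault: real environment variables win over the file
        os.environ.setdefault(key.strip(), value.strip())

# Demo: write a throwaway .env, load it, read the key back
Path(".env").write_text("# demo file\nLESSON_DEMO_KEY=sk-demo-not-real\n")
load_env_file()
print(os.environ["LESSON_DEMO_KEY"])
```

The setdefault call mirrors python-dotenv's default behaviour: values already set in the real environment take precedence over the file.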
Part 4 — NumPy: How AI Really Works Under the Hood
Every neural network — whether GPT-4, AlphaFold, or a simple image classifier — is, at its core, matrix multiplication plus simple nonlinear functions. NumPy lets you see this directly:
import numpy as np
# One forward pass through a single neural network layer
# input: batch of 4 samples, each with 3 features
X = np.array([[0.5, 0.2, 0.8],
              [0.1, 0.9, 0.3],
              [0.7, 0.4, 0.6],
              [0.3, 0.7, 0.2]])  # shape: (4, 3)
# weights: learned parameters of the layer (3 inputs → 2 outputs)
W = np.random.randn(3, 2) * 0.1 # shape: (3, 2)
b = np.zeros(2) # bias: shape (2,)
# Linear transformation: output = X @ W + b
# This ONE line is the core operation of every neural network
z = X @ W + b # shape: (4, 2)
# Apply activation function (ReLU: max(0, x))
def relu(x):
    return np.maximum(0, x)
output = relu(z)
print("Layer output shape:", output.shape) # (4, 2)
print(output)
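Stacking two such layers already gives a tiny network. A sketch (the sizes 3 → 5 → 2 and the seed are arbitrary choices, and the weights here are random — "learning" would mean adjusting them to reduce a loss, which frameworks do via backpropagation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((4, 3))  # batch of 4 samples, 3 features

W1, b1 = rng.standard_normal((3, 5)) * 0.1, np.zeros(5)  # layer 1: 3 -> 5
W2, b2 = rng.standard_normal((5, 2)) * 0.1, np.zeros(2)  # layer 2: 5 -> 2

h = np.maximum(0, X @ W1 + b1)  # hidden activations (ReLU), shape (4, 5)
scores = h @ W2 + b2            # raw class scores, shape (4, 2)

# softmax: exponentiate and normalise each row into probabilities
# (subtracting the row max first keeps np.exp numerically stable)
probs = np.exp(scores - scores.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

print(probs.shape)        # (4, 2)
print(probs.sum(axis=1))  # each row sums to 1
```

That is the whole recipe: matrix multiply, nonlinearity, repeat — deep networks just stack more of these layers.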
Your AI Learning Roadmap
How Your Python Skills Map to AI
| What you learned | How it’s used in AI |
|---|---|
| NumPy arrays & broadcasting | Tensors in PyTorch/TensorFlow are NumPy arrays on steroids. Every weight matrix is an ndarray. |
| Classes & OOP | PyTorch models are classes inheriting from nn.Module. Datasets are classes with __len__ and __getitem__. |
| requests & APIs | Calling OpenAI, Anthropic, Hugging Face Inference API — all HTTP requests with JSON payloads. |
| Generators & iterators | DataLoaders in PyTorch are generators — they yield batches lazily to keep memory usage low. |
| Decorators | @torch.no_grad() disables gradient computation. @app.route in Flask serves model predictions. |
| async / await | Streaming AI responses from APIs (token by token) uses async generators. FastAPI is fully async. |
| pandas & matplotlib | Exploratory data analysis, cleaning training data, plotting training curves and confusion matrices. |
| pytest & unit tests | Testing data pipelines, model output shapes, preprocessing functions — production ML needs tests. |
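The DataLoader row in the table can be made concrete with a few lines of plain Python: a generator that yields mini-batches lazily, so the full dataset never has to be materialised as one giant batch. (This is a sketch of the idea, not PyTorch's actual implementation — the real DataLoader adds shuffling, workers, and collation.)

```python
def batches(data, batch_size):
    """Yield consecutive mini-batches of `data`, lazily."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

for batch in batches(list(range(10)), batch_size=4):
    print(batch)
# [0, 1, 2, 3]
# [4, 5, 6, 7]
# [8, 9]
```

Nothing is computed until the loop asks for the next batch — exactly the lazy-iteration behaviour that keeps memory usage flat when training on datasets far larger than RAM.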
You already know Python. You’re already AI-ready.
The next step isn’t learning a new language — it’s picking an AI project that interests you and starting it. All the foundations are in place.
🧠 Quiz — Q1
What is the universal scikit-learn pattern used by every model?
🧠 Quiz — Q2
Why is it dangerous to hardcode API keys directly in Python source files?
🧠 Quiz — Q3
A PyTorch DataLoader yields mini-batches of training data one at a time. Which Python concept does this use?
🧠 Quiz — Q4
You have learned pandas, NumPy, requests, async, classes, and generators. What does this mean for your AI journey?