Neural Networks

The foundational architecture that powers modern AI.

What is a Neural Network?

A neural network is a computational model inspired by the brain. It consists of layers of interconnected nodes (neurons) that learn to transform inputs into outputs through training.

Core Components

The building blocks of neural networks.

Neurons

Basic units that compute weighted sums of inputs and apply activation functions.
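To make this concrete, here is a minimal sketch of a single artificial neuron in NumPy; the input values and zero weights are illustrative, not from any particular trained network:

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    passed through a sigmoid activation."""
    z = np.dot(weights, inputs) + bias   # weighted sum
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid squashes z into (0, 1)

# With all-zero weights and bias, z is 0 and sigmoid(0) is exactly 0.5.
out = neuron(np.array([1.0, 2.0, 3.0]), np.zeros(3), 0.0)
```

Everything a network "knows" lives in those weights and biases; training adjusts them so the outputs become useful.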

Layers

Groups of neurons: input layer, hidden layers, and output layer.

Weights & Biases

Learnable parameters that determine how inputs are transformed.

Activation Functions

Non-linear functions that allow networks to learn complex patterns.
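Two of the most common activation functions, sketched in NumPy for illustration (without a non-linearity like these, stacked layers would collapse into a single linear transformation):

```python
import numpy as np

def relu(x):
    # ReLU: zero for negative inputs, identity for positive inputs
    return np.maximum(0.0, x)

def sigmoid(x):
    # Sigmoid: squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])
r = relu(x)        # negatives clipped to 0, positives unchanged
s = sigmoid(0.0)   # 0.5 at the midpoint
```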

Types of Networks

Feedforward (MLP)

Information flows in one direction, from input to output. Well suited to tabular data.
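A full feedforward pass is just the single-neuron computation repeated layer by layer. This sketch uses the same 3 → 4 → 4 → 2 shape as the visualizer later in this page, with random weights purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A 3 -> 4 -> 4 -> 2 feedforward network; weights are random here,
# whereas a trained network would have learned them from data.
layer_sizes = [3, 4, 4, 2]
weights = [rng.normal(size=(m, n)) for n, m in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(m) for m in layer_sizes[1:]]

def forward(x):
    # Each layer: weighted sum plus bias, then sigmoid activation
    for W, b in zip(weights, biases):
        x = sigmoid(W @ x + b)
    return x

out = forward(np.array([0.14, 0.32, 0.62]))  # two outputs, each in (0, 1)
```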

Convolutional (CNN)

Specialized for images and spatial data.
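The core CNN operation is convolution: sliding a small kernel over the image and summing elementwise products. A hand-rolled sketch (technically cross-correlation, as in most deep-learning libraries) with a made-up vertical-edge-detector kernel:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the kernel across the image,
    summing elementwise products at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# The kernel responds strongly where intensity jumps from 0 to 1
image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])
edge_kernel = np.array([[-1., 1.],
                        [-1., 1.]])
features = conv2d(image, edge_kernel)  # peaks at the vertical edge
```

In a real CNN the kernel values are learned, and many kernels run in parallel to produce a stack of feature maps.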

Recurrent (RNN)

Processes sequences with memory of past inputs.
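The "memory" of an RNN is a hidden state updated at every step. A minimal sketch of one recurrence step, with small random weights chosen only for illustration:

```python
import numpy as np

def rnn_step(h, x, W_h, W_x, b):
    # The new hidden state mixes the previous state (the memory)
    # with the current input, through a tanh non-linearity.
    return np.tanh(W_h @ h + W_x @ x + b)

hidden_size, input_size = 4, 3
rng = np.random.default_rng(1)
W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
W_x = rng.normal(scale=0.1, size=(hidden_size, input_size))
b = np.zeros(hidden_size)

h = np.zeros(hidden_size)  # empty memory before the sequence starts
for x in [np.ones(input_size), np.zeros(input_size), np.ones(input_size)]:
    h = rnn_step(h, x, W_h, W_x, b)  # h now carries traces of all past inputs
```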

Transformer

Attention-based architecture powering modern LLMs.
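The heart of the transformer is scaled dot-product attention: every token's query is compared against every token's key, and the resulting softmax weights decide how much of each value contributes to the output. A sketch with random matrices standing in for real token embeddings:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stable softmax
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Scores compare each query with each key; dividing by sqrt(d_k)
    # keeps them in a range where softmax stays well-behaved.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(2)
Q = rng.normal(size=(5, 8))  # 5 tokens, 8-dimensional queries
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
out, attn = scaled_dot_product_attention(Q, K, V)
```

Real transformers run many such attention "heads" in parallel and stack dozens of layers of them.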

Neural Network Visualizer

Build and explore network architectures

Network Architecture

4 layers, 13 neurons

Input Layer: 3
Hidden Layer 1: 4
Hidden Layer 2: 4
Output Layer: 2

[Interactive diagram: neuron activation values shown across the four layers]

Each neuron computes a weighted sum of inputs and applies an activation function (sigmoid). Connection thickness represents weight magnitude; color indicates positive (green) or negative (red) weights.

Key Takeaways

  • Neural networks learn by adjusting weights through training
  • Depth (more layers) enables learning hierarchical features
  • Different architectures suit different data types
  • Modern LLMs are massive transformer networks
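The first takeaway, learning by adjusting weights, can be shown in a few lines: gradient descent on a single linear neuron fitting y = 2x. The data and learning rate are invented for the demo, but the loop is the same one that trains every network above:

```python
import numpy as np

# Fit y = w*x to data generated by y = 2x, by gradient descent
# on the mean squared error.
xs = np.array([1.0, 2.0, 3.0])
ys = 2.0 * xs

w = 0.0    # start from an uninformed weight
lr = 0.05  # learning rate: step size for each adjustment
for _ in range(200):
    pred = w * xs
    grad = np.mean(2 * (pred - ys) * xs)  # d(MSE)/dw
    w -= lr * grad                        # nudge the weight downhill

# w converges toward ~2.0, the true slope
```

Deep networks do exactly this for millions or billions of weights at once, using backpropagation to compute all the gradients efficiently.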