- What Is Big O?
- Why Use Big O?
- Quick Recap
- Example Problem: Add Numbers from 1 to N
- Which One Is “Better”?
- Smarter Way to Compare: Big O Notation
- So:
- Final Takeaway
- Why Counting Seconds Isn’t Enough — Measure Code with Operations Instead
- Think of Code Like Tasks
- What Counts As an “Operation”?
- Why Not Count Every Single Operation?
- Why This Really Matters
- Key Points
- Textbook Definition Of Big O
- More Examples
- Visualizing Big O
- Key Points
- So
- The 3 Rules of Simplifying Big O
- Common Big O Complexities
- Key Points on How to Simplify Big O
- What Is Space Complexity?
- What Takes Up Space in Code?
- Example: Just a Counter
- Example: Make a New Array
- Example: Modify In-Place
- Important
- Space Complexities
- Time vs Space Trade-Off
- Key Points
- Let’s Talk About Logs — And Why They Make Code Fast
- Why Logs Are Awesome in Big O
- Where Logs Show Up in Coding
- Final Thought
Today I started learning DSA from scratch. In this post, I am sharing what I learned about my first topic in DSA, which is Big O Notation.
What Is Big O?
Big O tells us how fast or slow our code gets as the input gets bigger.
It doesn’t measure exact time — it shows how your code “scales.”
Example: Constant Time — O(1)
function getFirstItem(arr) {
  return arr[0]; // always takes the same time: 1 step
}
Even if the array has 10 items or 10 million, it still does just one operation.
So this is O(1) — constant time.
Example: Linear Time — O(n)
function printAllNames(names) {
  for (let i = 0; i < names.length; i++) {
    console.log(names[i]);
  }
}
If `names` has 5 people, it runs 5 times.
If it has 1000, it runs 1000 times.
So this is O(n) — time grows with input.
Example: Quadratic Time — O(n²)
function printAllPairs(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      console.log(arr[i], arr[j]);
    }
  }
}
If `arr` has 10 items, this loop runs 100 times (10 × 10).
So this is O(n²) — it gets slow fast when data grows.
Why Use Big O?
1. Choose Better Code
You can reverse a string in 2 ways:
Way 1 — Loop (O(n)):
function reverseStr1(str) {
  let reversed = '';
  for (let i = str.length - 1; i >= 0; i--) {
    reversed += str[i];
  }
  return reversed;
}
Way 2 — Built-in (also O(n), but cleaner):
function reverseStr2(str) {
  return str.split('').reverse().join('');
}
Both are O(n) (they touch every letter once),
but the second one is shorter and easier to read.
2. Fix Slow Code
If your code is slow, Big O helps find where it’s slow.
3. Interviews Love Big O
Interviewers ask things like:
- “What’s the Big O of your solution?”
- “Can you make it faster?”
If you answer clearly:
“This loop is O(n), but I can improve it to O(log n) by using binary search.”
💥 Boom — you sound like a pro!
Quick Recap
- Big O tells us how code behaves as input grows.
- Helps compare and fix code with clear math logic.
- Makes you better at writing, optimizing, and explaining code.
- Always know if your solution is O(1), O(n), O(n²), etc.
When you’re learning to code, the first goal is to make your code work.
But the next goal is even more important: make sure it runs efficiently — especially when the input gets huge.
So how do we know which version of our code is better — without using a timer or stopwatch?
Example Problem: Add Numbers from 1 to N
Let’s say you’re asked to add up all the numbers from 1 to a number `n`.
✅ Method 1: Using a loop (O(n))
function computeSum(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}
This counts and adds each number one by one. It works fine, but it gets slower as `n` gets bigger.
Method 2: Using a math formula (O(1))
function computeSumFast(n) {
  return (n * (n + 1)) / 2;
}
This uses a clever math trick that instantly gives the same result — no loop needed, no matter how big `n` is.
Which One Is “Better”?
Both give the same result. But:
- One loops through every number (slow for big numbers)
- One gives the answer instantly (always fast)
So how do we compare them fairly?
You could use a timer, but timers depend on your computer, background apps, etc. Results might not always be accurate.
Smarter Way to Compare: Big O Notation
Big O is like a universal scale for measuring performance.
Instead of “this one took 2 seconds,” you say:
- The loop method is O(n) — time grows with the size of input
- The formula method is O(1) — runs instantly, always the same
This way, you can compare how code behaves, not just how fast it runs today.
So:
- `computeSum(n)` — O(n) — slows down as input gets bigger.
- `computeSumFast(n)` — O(1) — always fast, no matter the input.
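A quick way to convince yourself the two versions are interchangeable is to run them side by side. Here is a small sketch (both functions are repeated from above so the snippet runs on its own):

```javascript
// Loop version: adds each number one by one.
function computeSum(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

// Formula version: one calculation, regardless of n.
function computeSumFast(n) {
  return (n * (n + 1)) / 2;
}

// Same answers, very different amounts of work.
console.log(computeSum(10));          // 55
console.log(computeSumFast(10));      // 55
console.log(computeSumFast(1000000)); // 500000500000 — still just one formula evaluation
```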
That’s why understanding Big O is super useful — it helps you write better code that scales.
Final Takeaway
When you write code, don’t just ask:
“Does it work?”
Also ask:
“Will it work fast when the input gets huge?”
That’s the real power of Big O — helping you write smart, scalable code from day one.
Why Counting Seconds Isn’t Enough — Measure Code with Operations Instead
When comparing two pieces of code, you might think:
“I’ll just run both and see which one is faster.”
But here’s the problem:
Running the same code on different machines — or even the same machine at different times — gives different results. Timers lie.
So how do we know which code is truly better?
👉 Don’t count seconds. Count steps (operations).
Think of Code Like Tasks
Let’s say you’re organizing two events:
- Event A: 5 quick tasks
- Event B: 50 repetitive tasks
Time might vary depending on your energy, but the number of tasks never changes.
Code works the same way. Instead of worrying about exact runtime, we count how much work the computer has to do.
What Counts As an “Operation”?
Operations are the tiny actions your computer performs, like:
- `+` → addition
- `*` → multiplication
- `=` → assignment
- `<=` → comparison
Example Problem: Total Price from 1 to count
We want to add the numbers from 1 to `count`.
Let’s say `count = 4`; then the total is 1 + 2 + 3 + 4 = 10.
Method 1: Loop
function calculateTotal(count) {
  let sum = 0;
  for (let item = 1; item <= count; item++) {
    sum += item;
  }
  return sum;
}
This runs the loop `count` times.
Each iteration does about 4 small operations (add, assign, compare, increment).
If `count = 1000`, that’s roughly 4,000 operations.
So we say:
🕒 O(n) — number of operations grows with input
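To make the step counting concrete, here’s an illustrative sketch: the same loop, instrumented with a made-up `steps` counter. Exactly what counts as “one step” is a judgment call — the linear growth is not:

```javascript
// The loop from calculateTotal, with an illustrative step counter added.
function calculateTotalCounted(count) {
  let steps = 0;
  let sum = 0; steps++;   // one assignment
  for (let item = 1; item <= count; item++) {
    steps++;              // the comparison item <= count
    sum += item; steps++; // the add-and-assign
    steps++;              // the increment item++
  }
  return { sum, steps };
}

console.log(calculateTotalCounted(10));   // { sum: 55, steps: 31 }
console.log(calculateTotalCounted(1000)); // { sum: 500500, steps: 3001 } — about 3n
```

Ten times the input, ten times the steps: that is what O(n) means in practice.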
Method 2: Formula
function calculateTotalSmart(count) {
  return (count * (count + 1)) / 2;
}
Only 3 operations:
- `count + 1`
- `*` (multiply)
- `/` (divide)
Even if `count = 1,000,000`, it’s still just 3 steps.
So we say:
⚡ O(1) — operations are constant
Why Not Count Every Single Operation?
Technically, you could say:
- “This function does `4n + 2` steps.”
- “That one does `5n + 1` steps.”
But in Big O, we don’t care about small details.
We care about how it scales:
- `4n + 2` and `5n + 1` are both still O(n)
- What matters is: “Does it grow with input?”
Why This Really Matters
Imagine running this on 1 billion items:
- 🐢 Loop method: Needs billions of operations
- ⚡ Formula method: Still just 3 operations
So now, even if you don’t use a stopwatch,
👉 you know which code is faster just by looking at it.
Key Points
✅ Don’t just time your code — analyze it
✅ Count operations like a checklist: `+`, `=`, `<`, etc.
✅ Focus on how operations grow as input grows
✅ That’s the heart of Big O Notation
You’ve heard terms like “function performance” or “runtime trends” — but now it’s time to name what we’re really talking about:
👉 Big O Notation
Big O describes how your code grows — the number of steps it takes as the input gets bigger.
- ❌ It’s not about exact speed
- ❌ It doesn’t care what machine you’re using
- ✅ It’s just about how much work your code does as input increases
Textbook Definition Of Big O
You might read something like:
“An algorithm is O(f(n)) if its operations are eventually less than some constant multiple of f(n)…”
Hard, right?
In plain words:
👉 Big O tells us how slow a function can get as inputs grow — like a “worst-case growth rate.”
More Examples
Two Loops in a Row
function countForwardAndBackward(n) {
  for (let i = 0; i < n; i++) {
    console.log(i);
  }
  for (let j = n - 1; j >= 0; j--) {
    console.log(j);
  }
}
- Each loop is O(n)
- Together? Still just O(n)
👉 Because Big O ignores constants like 2
Nested Loops (O(n²))
function showAllPairs(n) {
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      console.log(i, j);
    }
  }
}
- For each `i`, we loop through all `j`
- So it grows super fast: 10 → 100 steps, 100 → 10,000!
- This is O(n²) — quadratic time
Visualizing Big O
- O(1) → Flat line. No matter what input, speed stays the same
- O(n) → Straight line upward
- O(n²) → Starts slow, then shoots up (like a steep curve)
Try this:
Run a nested loop with input `500`. Watch how your browser slows down. Now you can feel Big O in action.
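Here’s a minimal snippet you can paste into a console to try it. The milliseconds will differ on every machine — which is exactly why we count steps instead — but the quadratic growth is unmistakable:

```javascript
// Counts (and times) the n × n pairs a nested loop visits.
function runPairs(n) {
  let pairs = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      pairs++; // stand-in for real work like console.log(i, j)
    }
  }
  return pairs;
}

for (const n of [100, 500, 1000]) {
  const start = Date.now();
  const pairs = runPairs(n);
  console.log(`n=${n}: ${pairs} pairs in ${Date.now() - start} ms`);
}
// n=100 → 10,000 pairs; n=500 → 250,000; n=1000 → 1,000,000
```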
Key Points
- Big O = how performance grows, not how fast code runs
- We ignore constants (O(2n) becomes O(n))
- Nested loops = 🚨 Warning sign for O(n²)
- It shows the worst case, not average or best
So
Big O isn’t about memorizing formulas — it’s about understanding how your code behaves when inputs get huge.
Ask yourself:
“If I double the input, does my code take twice as long? Or 4x as long?”
That’s what Big O answers — and that’s why it’s a must-know tool for every developer.
Big O can feel scary, but once you learn to simplify it, everything becomes much easier.
You don’t need to do exact math — you just need to understand how your code grows as input increases.
If someone says a function takes `O(7n + 12)` time, you don’t need to worry about the `+ 12` or the `7`.
Because Big O ignores the small stuff.
All that matters is how your function scales with large inputs.
So,
O(7n + 12) → O(n)
The 3 Rules of Simplifying Big O
1️⃣ Drop Constants
- `O(4n)` → `O(n)`
- `O(1000000)` → `O(1)`
Because constants don’t change the growth trend — just the starting point.
2️⃣ Keep the Biggest Term
- `O(n² + 10n + 42)` → `O(n²)`
- `O(n + log n)` → `O(n)`
As `n` gets huge, the biggest term dominates.
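You can watch the biggest term take over numerically. A small sketch using the `n² + 10n + 42` example:

```javascript
// As n grows, the n² term accounts for nearly all of n² + 10n + 42.
function full(n)    { return n * n + 10 * n + 42; }
function bigTerm(n) { return n * n; }

for (const n of [10, 100, 10000]) {
  const share = (bigTerm(n) / full(n) * 100).toFixed(1);
  console.log(`n=${n}: n² is ${share}% of the total`);
}
// At n = 10000, n² is 99.9% of the total — the 10n + 42 is noise.
```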
3️⃣ Nested Work = Multiply
for (let i = 0; i < n; i++) {
  for (let j = 0; j < n; j++) {
    // work
  }
}
This is O(n × n) → O(n²)
Each loop stacks on top of the other.
Common Big O Complexities
O(1) — Constant — Accessing an array element
O(log n) — Logarithmic (shrinks fast) — Binary search
O(n) — Linear — Looping over an array
O(n log n) — “Fast-ish” sorting — Merge sort, quick sort (average case)
O(n²) — Quadratic (nested loops) — Comparing all pairs
O(2ⁿ) — Exponential (very slow) — Brute-force all possibilities
O(n!) — Factorial (insanely slow) — Permutations of n items
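To get a feel for how far apart these classes are, here’s a rough sketch that prints idealized step counts (formulas, not measurements) for a few of them at growing input sizes:

```javascript
// Approximate step counts for several complexity classes.
const classes = {
  'O(log n)':   n => Math.ceil(Math.log2(n)),
  'O(n)':       n => n,
  'O(n log n)': n => Math.ceil(n * Math.log2(n)),
  'O(n²)':      n => n * n,
};

for (const n of [10, 100, 1000]) {
  const row = Object.entries(classes)
    .map(([name, f]) => `${name}=${f(n)}`)
    .join('  ');
  console.log(`n=${n}:  ${row}`);
}
// n=1000:  O(log n)=10  O(n)=1000  O(n log n)=9966  O(n²)=1000000
```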
Key Points on How to Simplify Big O
✅ Drop constants (ignore +5, ×7, etc.)
✅ Keep the term with the biggest growth
✅ Multiply if loops are nested
✅ Focus on how it scales — not on exact math
✅ Always ask: “How does this function behave when input gets really big?”
So far, we’ve talked about time complexity (how fast your code runs). But there’s another important question:
How much memory is your code using?
Because if your function runs in 1 second but eats up 3 GB of memory, that’s a problem.
That’s where space complexity comes in.
What Is Space Complexity?
Space Complexity = How much extra memory your algorithm uses as input grows.
- We use Big O notation here too
- But we measure space, not time
- We ignore input size itself (unless stated)
- We focus on auxiliary space — the extra memory used
What Takes Up Space in Code?
Here’s a quick reference:
Numbers / Booleans — O(1) — Fixed space, no matter the value.
Strings — O(n) — Grows with number of characters.
Arrays — O(n) — Grows with number of elements.
Objects — O(n) — Grows with number of keys/values.
Example: Just a Counter
function countPositive(nums) {
  let count = 0;
  for (let i = 0; i < nums.length; i++) {
    if (nums[i] > 0) count++;
  }
  return count;
}
- ✅ Only one variable: `count`
- ✅ Doesn’t store new data
🧠 Space Complexity: O(1) — constant space
Example: Make a New Array
function multiplyByThree(arr) {
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    result.push(arr[i] * 3);
  }
  return result;
}
- ❗ Creates a new array
- If input has 100 items, output does too
🧠 Space Complexity: O(n) — space grows with input
Example: Modify In-Place
function incrementAll(arr) {
  for (let i = 0; i < arr.length; i++) {
    arr[i] += 1;
  }
  return arr;
}
- ✅ Modifies original array
- ❌ Doesn’t create new one
🧠 Space Complexity: O(1) — constant space
Important
Just because a loop runs many times doesn’t mean the space is big.
Space complexity only cares about how much new memory your code uses — not how long it runs.
Space Complexities
O(1) — Constant space — Counters, flags, booleans.
O(n) — Grows with input — New array, copied string, Set.
O(n²) — Grows with 2D/nested data — Matrix, table of comparisons.
Time vs Space Trade-Off
Some algorithms save time by using more space, and vice versa.
Example:
To check for duplicates in an array:
- ❌ Brute force → O(n²) time, O(1) space
- ✅ Using a `Set` → O(n) time, O(n) space
You pick based on what matters more: speed or memory.
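Here’s what that trade-off looks like in code — a sketch with two illustrative functions, one per approach:

```javascript
// O(n²) time, O(1) extra space: compare every pair.
function hasDuplicateSlow(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}

// O(n) time, O(n) extra space: remember everything seen so far in a Set.
function hasDuplicateFast(arr) {
  const seen = new Set();
  for (const value of arr) {
    if (seen.has(value)) return true;
    seen.add(value);
  }
  return false;
}

console.log(hasDuplicateSlow([1, 2, 3, 2])); // true
console.log(hasDuplicateFast([1, 2, 3, 4])); // false
```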
Key Points
- Space complexity = How much memory your algorithm needs
- O(1) = Best (no extra memory), but not always possible
- Arrays/Strings/Objects = Usually O(n)
- Fast code can still use too much memory
- Good code balances speed AND space
Let’s Talk About Logs — And Why They Make Code Fast
When you hear “logarithm” you might think of complicated math. But in programming, logs are your friend — especially when it comes to making your code run fast.
Let’s simplify what logs really are, when they show up in code, and why they’re awesome in Big O notation.
Forget the formulas. A log just answers this:
“How many times can I divide a number by 2 until I get 1?”
Example:
log₂(8) = 3
// because 8 → 4 → 2 → 1 (3 steps)
So, log₂(16) = 4, and log₂(32) = 5, and so on.
It’s just the reverse of exponents:
2³ = 8 → log₂(8) = 3
That’s it!
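That “keep halving until you hit 1” idea translates directly into code. Here’s a tiny illustrative function (the name is made up):

```javascript
// Counts how many times n can be halved before reaching 1 — that's floor(log₂ n).
function countHalvings(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2);
    steps++;
  }
  return steps;
}

console.log(countHalvings(8));       // 3  (8 → 4 → 2 → 1)
console.log(countHalvings(1000000)); // 19 — roughly log₂(1,000,000) ≈ 20
```

A million items, and only about 20 halvings to get down to one. That is the number behind binary search’s speed.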
Why Logs Are Awesome in Big O
Say you have a sorted array with 1,000,000 items and want to find one number.
If you use a normal linear search:
- It might take up to 1,000,000 steps ➝ O(n)
But if you use binary search:
- It cuts the array in half each time.
- It finds the number in about 20 steps!
That’s O(log n) — way faster!
Example: Binary Search
Here’s the classic log-based algorithm:
function binarySearch(sortedArr, target) {
  let left = 0;
  let right = sortedArr.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (sortedArr[mid] === target) return mid;
    else if (sortedArr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}
✅ Every time it cuts the problem in half, not linearly.
That’s why it’s O(log n), not O(n).
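A quick usage sketch on a small sorted array (the function is repeated so the snippet runs on its own):

```javascript
function binarySearch(sortedArr, target) {
  let left = 0;
  let right = sortedArr.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (sortedArr[mid] === target) return mid;
    else if (sortedArr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}

// 10 elements: at most about log₂(10) ≈ 4 halvings to find anything.
const nums = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91];
console.log(binarySearch(nums, 23)); // 5 — found at index 5
console.log(binarySearch(nums, 40)); // -1 — not in the array
```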
You don’t need to calculate exact log values.
You just need to recognize when logs happen:
If the algorithm shrinks the problem by half every time, it’s likely O(log n).
Where Logs Show Up in Coding
You’ll see O(log n) in:
- 🔍 Binary Search
- 🧠 Divide-and-Conquer Algorithms
- 🌲 Balanced Trees (like AVL, Red-Black Trees)
- 📦 Heap Operations (like `heapify`, priority queues)
- 🔃 Merge Sort / Quick Sort (the `log n` factor in their O(n log n) time)
Any time a problem is solved by breaking it into halves or parts, logs are lurking.
O(1) — Flat line — blazing fast.
O(log n) — Slight rise — very fast.
O(n) — Straight line — okay.
O(n²) — Steep curve — gets slow fast.
O(log n) is just behind O(1) in speed. That’s why devs love it.
Final Thought
Don’t fear logs — embrace them.
They’re not some scary math concept — they’re a sign that your algorithm is smart and scalable.
So next time you see `O(log n)`, give yourself a high five 🙌
It means you’re writing efficient code.
After going through these concepts, I really feel that I am learning and that I can achieve my goal of learning DSA.
Thanks for reading!
— Ranjan