When it comes to iterating over collections in C#, the performance difference between foreach and for loops primarily depends on the collection type being traversed.
For arrays and List<T>, a traditional for loop with indexing can be marginally faster in performance-critical scenarios because it avoids enumerator overhead.
A foreach loop works through an enumerator: for a List<T> this is a struct (no heap allocation, but still a MoveNext() call and Current access per iteration), while iterating through an IEnumerable<T> reference forces a boxed, heap-allocated enumerator. For arrays, the compiler rewrites foreach into an indexed loop, so there is no difference at all.
However, for most modern applications, this performance difference is negligible and often optimized away by the JIT compiler.
The readability benefits of foreach typically outweigh the minor performance gains of for loops in non-critical code paths.
Collections like LinkedList<T>, or those implementing only IEnumerable<T>, actually perform better with foreach: LinkedList<T> exposes no indexer at all, and simulating one with ElementAt(i) would restart an O(n) walk from the head on every access.
The rule of thumb: use foreach for readability in most cases, and only switch to for loops when benchmarking shows a meaningful performance improvement in your specific high-performance scenarios.
Example
```csharp
using System.Collections.Generic;
using System.Linq;

public class LoopExamples
{
    // Collection to iterate
    private readonly List<int> numbers = Enumerable.Range(1, 10000).ToList();

    // Using for loop
    public int ForLoopExample(List<int> items)
    {
        int sum = 0;
        for (int i = 0; i < items.Count; i++)
        {
            sum += items[i];
        }
        // For loop can be slightly faster for List<T> and arrays
        // because it avoids enumerator overhead
        return sum;
    }

    // Using foreach loop
    public int ForEachLoopExample(List<int> items)
    {
        int sum = 0;
        foreach (int item in items)
        {
            sum += item;
        }
        // More readable and works well for any collection type
        // Preferred for most scenarios where performance isn't critical
        return sum;
    }

    // For a LinkedList, foreach is typically faster
    public int LinkedListExample(LinkedList<int> linkedItems)
    {
        int sum = 0;
        // This would be inefficient with a for loop since LinkedList
        // doesn't support efficient indexing
        foreach (int item in linkedItems)
        {
            sum += item;
        }
        return sum;
    }
}
```
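Before switching a hot path from foreach to for, it's worth measuring on your actual workload. Here's a minimal benchmark sketch; it assumes the third-party BenchmarkDotNet NuGet package is installed, and the 10,000-element list size is an arbitrary choice:

```csharp
using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class LoopBenchmarks
{
    private List<int> items = new();

    [GlobalSetup]
    public void Setup() => items = Enumerable.Range(1, 10_000).ToList();

    [Benchmark(Baseline = true)]
    public int ForLoop()
    {
        int sum = 0;
        for (int i = 0; i < items.Count; i++) sum += items[i];
        return sum;
    }

    [Benchmark]
    public int ForEachLoop()
    {
        int sum = 0;
        foreach (int item in items) sum += item;
        return sum;
    }
}

public class Program
{
    // Run with a Release build; BenchmarkDotNet reports mean times and allocations
    public static void Main() => BenchmarkRunner.Run<LoopBenchmarks>();
}
```

In practice, expect the two to land within noise of each other for a simple body like this, which is exactly why measuring first beats rewriting on instinct.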
When working with SQL Server, you may often need to count the number of unique values in a specific column. This is useful for analyzing data, detecting duplicates, and understanding dataset distributions.
To count the number of unique values in a column, SQL Server provides the COUNT(DISTINCT column_name) function. Here’s a simple example:
```sql
SELECT COUNT(DISTINCT column_name) AS distinct_count
FROM table_name;
```
This query will return the number of unique values in column_name.
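One detail worth knowing: COUNT(DISTINCT ...) ignores NULLs, so rows where the column is NULL do not add to the count. A small illustration, using a hypothetical Orders table invented for this example:

```sql
-- Hypothetical table and data, for illustration only
CREATE TABLE Orders (order_id INT, customer_id INT);
INSERT INTO Orders (order_id, customer_id)
VALUES (1, 100), (2, 100), (3, 200), (4, NULL);

-- Returns 2 (the values 100 and 200); the NULL row is ignored
SELECT COUNT(DISTINCT customer_id) AS distinct_count
FROM Orders;
```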
If you need to count distinct combinations of multiple columns, you can use a subquery:
```sql
SELECT COUNT(*) AS distinct_count
FROM (
    SELECT DISTINCT column1, column2
    FROM table_name
) AS subquery;
```
This approach ensures that only unique pairs of column1 and column2 are counted.
By leveraging COUNT(DISTINCT column_name), you can efficiently analyze your database and extract meaningful insights. Happy querying!
Reading a file line by line is useful when handling large files without loading everything into memory at once.
✅ Best Practice: Use File.ReadLines(), which is more memory-efficient.
```csharp
foreach (string line in File.ReadLines("file.txt"))
{
    Console.WriteLine(line);
}
```
Why use ReadLines()?
- Reads one line at a time, reducing overall memory usage.
- Ideal for large files (e.g., logs, CSVs).
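Because File.ReadLines() returns a lazily evaluated IEnumerable<string>, it also composes with LINQ without pulling the whole file into memory. A minimal sketch, where the app.log file name and the "ERROR" marker are assumptions made up for the example:

```csharp
using System;
using System.IO;
using System.Linq;

// Count error lines in a large log file, streaming one line at a time.
// "app.log" and the "ERROR" marker are illustrative assumptions.
int errorCount = File.ReadLines("app.log")
    .Count(line => line.Contains("ERROR"));

Console.WriteLine($"Found {errorCount} error lines.");
```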
Alternative: Use StreamReader (More Control)
For scenarios where you need custom processing while reading the contents of the file:
```csharp
using (StreamReader reader = new StreamReader("file.txt"))
{
    string? line;
    while ((line = reader.ReadLine()) != null)
    {
        Console.WriteLine(line);
    }
}
```
Why use StreamReader?
- Lets you handle exceptions, encoding, and buffering.
- Supports custom processing (e.g., search for a keyword while reading), as in the sketch below.
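Here is a minimal sketch combining those two points: it opens the file with an explicit encoding and stops at the first line containing a keyword. The file name, the UTF-8 choice, and the "keyword" string are all assumptions made up for the example:

```csharp
using System;
using System.IO;
using System.Text;

// Explicit encoding; UTF-8 is an assumption about this particular file.
using (StreamReader reader = new StreamReader("file.txt", Encoding.UTF8))
{
    string? line;
    int lineNumber = 0;
    while ((line = reader.ReadLine()) != null)
    {
        lineNumber++;
        if (line.Contains("keyword"))
        {
            Console.WriteLine($"Found on line {lineNumber}: {line}");
            break; // stop reading as soon as a match is found
        }
    }
}
```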
When to Use ReadAllLines()? If you need all lines in memory at once (for example, for random access by index or multiple passes), use:
```csharp
string[] lines = File.ReadAllLines("file.txt");
```
Caution: this loads the entire file into memory, so avoid it for large files!
Slow initial load times can drive users away from your React application. One powerful technique to improve performance is lazy loading - loading components only when they're needed.
Let's explore how to implement this in React.
By default, your bundler packages all of your React components together, forcing users to download everything upfront. Once that initial download completes, navigation is quick and streamlined; however, depending on the size of your application, it can also mean a long initial load time.
```jsx
import HeavyComponent from './HeavyComponent';
import AnotherHeavyComponent from './AnotherHeavyComponent';

function App() {
  return (
    <div>
      {/* These components load even if user never sees them */}
      <HeavyComponent />
      <AnotherHeavyComponent />
    </div>
  );
}
```
React.lazy() lets you defer loading components until they're actually needed:
```jsx
import React, { lazy, Suspense } from 'react';

// Components are now loaded only when rendered
const HeavyComponent = lazy(() => import('./HeavyComponent'));
const AnotherHeavyComponent = lazy(() => import('./AnotherHeavyComponent'));

function App() {
  return (
    <div>
      <Suspense fallback={<div>Loading...</div>}>
        <HeavyComponent />
        <AnotherHeavyComponent />
      </Suspense>
    </div>
  );
}
```
Combine with React Router for even better performance:
```jsx
import React, { lazy, Suspense } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

const Home = lazy(() => import('./pages/Home'));
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Settings = lazy(() => import('./pages/Settings'));

function App() {
  return (
    <BrowserRouter>
      <Suspense fallback={<div>Loading...</div>}>
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/dashboard" element={<Dashboard />} />
          <Route path="/settings" element={<Settings />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}
```
Implement these techniques in your React application today and watch your load times improve dramatically!