C# foreach vs for loop: Which is faster and when to use each

When it comes to iterating over collections in C#, the performance difference between foreach and for loops primarily depends on the collection type being traversed.

For arrays and List<T>, a traditional for loop with indexing can be marginally faster because it avoids the overhead of the enumerator pattern, which can matter in performance-critical scenarios.

The foreach loop compiles down to a call to GetEnumerator() followed by repeated MoveNext()/Current calls. When the collection is typed as IEnumerable<T>, this allocates an enumerator object on the heap; List<T> avoids the allocation by returning a struct enumerator, and the compiler lowers foreach over an array to a plain indexed loop.
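To make that concrete, here is a rough sketch of what the compiler generates for a foreach over a List<int> (the method name is ours, for illustration):

public int SumWithManualEnumerator(List<int> items)
{
    int sum = 0;
    // This is roughly what foreach (int item in items) compiles to:
    List<int>.Enumerator e = items.GetEnumerator(); // struct enumerator for List<T>, no heap allocation
    try
    {
        while (e.MoveNext())
        {
            sum += e.Current;
        }
    }
    finally
    {
        e.Dispose(); // foreach always disposes the enumerator when done
    }
    return sum;
}

For arrays the compiler skips this pattern entirely and emits an index-based loop, which is why foreach over an array costs the same as for.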

However, for most modern applications, this performance difference is negligible and often optimized away by the JIT compiler.

The readability benefits of foreach typically outweigh the minor performance gains of for loops in non-critical code paths.

Collections like LinkedList<T>, or those exposing only IEnumerable<T>, actually perform better with foreach, since they don't support efficient random access.

The rule of thumb: use foreach for readability in most cases, and switch to a for loop only when benchmarking shows a meaningful improvement in your specific hot path.

Example

// Collection to iterate (Enumerable.Range requires using System.Linq)
List<int> numbers = Enumerable.Range(1, 10000).ToList();

// Using for loop
public int ForLoopExample(List<int> items)
{
    int sum = 0;
    for (int i = 0; i < items.Count; i++)
    {
        sum += items[i];
    }
    // For loop can be slightly faster for List<T> and arrays
    // because it avoids the enumerator pattern
    return sum;
}

// Using foreach loop 
public int ForEachLoopExample(List<int> items)
{
    int sum = 0;
    foreach (int item in items)
    {
        sum += item;
    }
    // More readable and works with any enumerable collection
    // Preferred for most scenarios where performance isn't critical
    return sum;
}

// For a LinkedList, foreach is typically faster
public int LinkedListExample(LinkedList<int> linkedItems)
{
    int sum = 0;
    // An index-based for loop would be O(n^2) here: LinkedList<int>
    // has no indexer, and each positional lookup (e.g., ElementAt)
    // walks the list from the head
    foreach (int item in linkedItems)
    {
        sum += item;
    }
    return sum;
}
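The rule of thumb above says to switch only when a benchmark shows a real gain. A minimal Stopwatch-based comparison might look like this (a rough sketch, not a substitute for a proper harness such as BenchmarkDotNet):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

public static class LoopBenchmark
{
    public static void Main()
    {
        List<int> numbers = Enumerable.Range(1, 10_000_000).ToList();

        var sw = Stopwatch.StartNew();
        long forSum = 0;
        for (int i = 0; i < numbers.Count; i++)
        {
            forSum += numbers[i];
        }
        sw.Stop();
        Console.WriteLine($"for:     {sw.ElapsedMilliseconds} ms (sum {forSum})");

        sw.Restart();
        long foreachSum = 0;
        foreach (int n in numbers)
        {
            foreachSum += n;
        }
        sw.Stop();
        Console.WriteLine($"foreach: {sw.ElapsedMilliseconds} ms (sum {foreachSum})");
    }
}

Expect the difference to be small on List<T> and essentially zero on arrays; trust your own numbers over generic advice.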

Related

Removing duplicates from a list in C# is a common task, especially when working with large datasets. C# provides multiple ways to achieve this efficiently, leveraging built-in collections and LINQ.

Using HashSet (Fastest for Unique Elements)

A HashSet<T> automatically removes duplicates since it only stores unique values. This is one of the fastest methods:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = new HashSet<int>(numbers).ToList();
Console.WriteLine(string.Join(", ", numbers)); // Typically: 1, 2, 3, 4, 5 (HashSet<T> does not guarantee order)

Using LINQ Distinct (Concise and Readable)

LINQ’s Distinct() method provides an elegant way to remove duplicates:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = numbers.Distinct().ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Removing Duplicates by Custom Property (For Complex Objects)

When working with objects, DistinctBy() from .NET 6+ simplifies duplicate removal based on a property:

using System.Linq;
using System.Collections.Generic;

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

List<Person> people = new List<Person>
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 },
    new Person { Name = "Alice", Age = 30 }
};

people = people.DistinctBy(p => p.Name).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Output: Alice, Bob

For earlier .NET versions, use GroupBy():

people = people.GroupBy(p => p.Name).Select(g => g.First()).ToList();

Performance Considerations

  • HashSet<T> is fast, but it relies on the element type's equality semantics and does not guarantee element order; for custom classes you must override Equals/GetHashCode or supply an IEqualityComparer<T> (see the sketch after this list).
  • Distinct() uses a hash set internally, so its speed is broadly comparable; it streams results and preserves the original order.
  • DistinctBy() streams like Distinct(), while GroupBy() buffers every group before yielding, so prefer DistinctBy() on .NET 6+ for large sequences.
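
For reference, here is a hedged sketch of deduplicating the Person objects from above with HashSet<T> and a custom comparer (the comparer class is ours, for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical comparer: two Person objects are considered equal when their Names match
class PersonNameComparer : IEqualityComparer<Person>
{
    public bool Equals(Person x, Person y) => x?.Name == y?.Name;
    public int GetHashCode(Person p) => p.Name?.GetHashCode() ?? 0;
}

List<Person> people = new List<Person>
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 },
    new Person { Name = "Alice", Age = 30 }
};

// The HashSet keeps the first Person seen for each Name
people = new HashSet<Person>(people, new PersonNameComparer()).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Typically: Alice, Bob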

Conclusion

Choosing the best approach depends on the data type and use case. HashSet<T> is ideal for primitive types, Distinct() is simple and readable, and DistinctBy() (or GroupBy()) is effective for objects.


Reading a file line by line is useful when handling large files without loading everything into memory at once.

✅ Best Practice: Use File.ReadLines(), which is more memory-efficient.

Example

foreach (string line in File.ReadLines("file.txt"))
{
    Console.WriteLine(line);
}

Why use ReadLines()?

File.ReadLines() returns a lazy IEnumerable<string>, so it reads one line at a time instead of loading the whole file. Ideal for large files (e.g., logs, CSVs).
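
Because it returns an IEnumerable<string>, ReadLines() also composes with LINQ without materializing the file. A small sketch (the file name and keyword are ours, for illustration):

using System;
using System.IO;
using System.Linq;

// Count matching lines in a large log without loading it all into memory
int errorCount = File.ReadLines("app.log").Count(line => line.Contains("ERROR"));
Console.WriteLine($"Errors found: {errorCount}");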

Alternative: Use StreamReader (More Control)

For scenarios where you need custom processing while reading the contents of the file:

using (StreamReader reader = new StreamReader("file.txt"))
{
    string? line;
    while ((line = reader.ReadLine()) != null)
    {
        Console.WriteLine(line);
    }
}

Why use StreamReader?

Gives you control over encoding and buffering, and supports custom processing (e.g., stopping early or searching for a keyword while reading).
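
For example, a sketch that reads with an explicit encoding and stops at the first line containing a keyword (the keyword is our illustration):

using System;
using System.IO;
using System.Text;

using (StreamReader reader = new StreamReader("file.txt", Encoding.UTF8))
{
    string? line;
    while ((line = reader.ReadLine()) != null)
    {
        if (line.Contains("ERROR"))
        {
            Console.WriteLine($"Found: {line}");
            break; // stop reading as soon as the keyword appears
        }
    }
}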

When to Use ReadAllLines()? If you need all lines at once, use:

string[] lines = File.ReadAllLines("file.txt");

Caution: Loads the entire file into memory—avoid for large files!


In C#, you can format an integer with commas (thousands separator) using ToString with a format specifier.

int number = 1234567;
string formattedNumber = number.ToString("N0"); // "1,234,567"
Console.WriteLine(formattedNumber);

Explanation:

"N0": The "N" format specifier stands for Number, and "0" means no decimal places. The output depends on the culture settings, so in regions where , is the decimal separator, you might get 1.234.567.

Alternative:

You can also specify culture explicitly if you need a specific format:

using System.Globalization;

int number = 1234567;
string formattedNumber = number.ToString("N0", CultureInfo.InvariantCulture);
Console.WriteLine(formattedNumber); // "1,234,567"
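
The same specifier also works inside string interpolation, which is often the most convenient form:

int number = 1234567;
Console.WriteLine($"{number:N0}"); // "1,234,567" under an English-style culture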