How to Measure Execution Time of a Method in C#

Measuring the execution time of C# methods is essential for performance optimization and identifying bottlenecks in your application.

The most straightforward approach uses the Stopwatch class from the System.Diagnostics namespace, which provides high-precision timing capabilities.

This approach is perfect for quick performance checks during development or when troubleshooting specific methods in production code.

Here's a practical example: Imagine you have a method that processes a large dataset and you want to measure its performance.

First, add using System.Diagnostics; to your imports. Then implement timing as shown below:

public void MeasurePerformance()
{
    Stopwatch stopwatch = new Stopwatch();
    
    // Start timing
    stopwatch.Start();
    
    // Call the method you want to measure
    ProcessLargeDataset();
    
    // Stop timing
    stopwatch.Stop();
    
    // Get the elapsed time
    Console.WriteLine($"Processing time: {stopwatch.ElapsedMilliseconds} ms");
    // Or use ElapsedTicks for higher precision
    Console.WriteLine($"Processing ticks: {stopwatch.ElapsedTicks}");
}
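
A handy shorthand is Stopwatch.StartNew(), which creates and starts the stopwatch in a single call; the Elapsed property also exposes the result as a TimeSpan for formatted output:

public void MeasurePerformanceShorthand()
{
    // Create and start in one call
    Stopwatch stopwatch = Stopwatch.StartNew();

    ProcessLargeDataset();

    stopwatch.Stop();

    // Elapsed is a TimeSpan, convenient for custom formatting
    Console.WriteLine($"Processing time: {stopwatch.Elapsed.TotalMilliseconds:F2} ms");
}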

For more advanced scenarios, consider using the BenchmarkDotNet library, which offers comprehensive benchmarking with statistical analysis.

Simply install the NuGet package, decorate methods with the [Benchmark] attribute, and run BenchmarkRunner.Run<YourBenchmarkClass>() to generate detailed reports comparing different implementation strategies.
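
A minimal sketch of that setup might look like the following (the MyBenchmarks class and its method are placeholders for your own code):

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class MyBenchmarks
{
    [Benchmark]
    public void ProcessDataset()
    {
        // Replace with the real work you want to measure
        System.Threading.Thread.Sleep(10);
    }
}

public class Program
{
    public static void Main(string[] args)
    {
        // Discovers all [Benchmark] methods in the class and prints a statistical report
        BenchmarkRunner.Run<MyBenchmarks>();
    }
}

Note that BenchmarkDotNet expects a Release build and will warn you if it detects a Debug configuration.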

Related

Reading a file line by line is useful when handling large files without loading everything into memory at once.

✅ Best Practice: Use File.ReadLines(), which streams lines lazily and is therefore far more memory-efficient than loading the whole file at once.

Example

foreach (string line in File.ReadLines("file.txt"))
{
    Console.WriteLine(line);
}

Why use ReadLines()?

Reads one line at a time, reducing overall memory usage. Ideal for large files (e.g., logs, CSVs).
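
Because ReadLines() is lazy, it also composes naturally with LINQ. As a quick sketch (the file name and the "ERROR" marker are illustrative), you can scan a large log without ever materializing it in memory:

using System;
using System.IO;
using System.Linq;

// Counts matching lines while streaming; the full file is never held in memory
int errorCount = File.ReadLines("app.log")
    .Count(line => line.Contains("ERROR"));

Console.WriteLine($"Found {errorCount} error lines");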

Alternative: Use StreamReader (More Control)

For scenarios where you need custom processing while reading the contents of the file:

using (StreamReader reader = new StreamReader("file.txt"))
{
    string? line;
    while ((line = reader.ReadLine()) != null)
    {
        Console.WriteLine(line);
    }
}

Why use StreamReader?

Lets you handle exceptions, encoding, and buffering. Supports custom processing (e.g., search for a keyword while reading).
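
For example, here is a sketch that reads with an explicit encoding and stops at the first line containing a keyword (the file name and keyword are illustrative):

using System;
using System.IO;
using System.Text;

using (StreamReader reader = new StreamReader("file.txt", Encoding.UTF8))
{
    string? line;
    int lineNumber = 0;
    while ((line = reader.ReadLine()) != null)
    {
        lineNumber++;
        if (line.Contains("keyword"))
        {
            Console.WriteLine($"Found on line {lineNumber}: {line}");
            break; // Stop reading as soon as the keyword is found
        }
    }
}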

When to Use ReadAllLines()?

If you need all lines at once (for example, for indexing or repeated iteration), use:

string[] lines = File.ReadAllLines("file.txt");

Caution: This loads the entire file into memory, so avoid it for large files!

When working with URLs in C#, encoding is essential to ensure that special characters (like spaces, ?, &, and =) don’t break the URL structure. The recommended way to encode a string for a URL is by using Uri.EscapeDataString(), which converts unsafe characters into their percent-encoded equivalents.

string rawText = "hello world!";
string encodedText = Uri.EscapeDataString(rawText);

Console.WriteLine(encodedText); // Output: hello%20world%21

This method encodes spaces as %20, making it ideal for query parameters.
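
As a quick sketch, here is how it might be used to assemble a query string by hand (the URL and parameter name are illustrative):

string baseUrl = "https://example.com/search";
string query = "C# stopwatch & timers";

// Escape each value individually before splicing it into the URL
string url = $"{baseUrl}?q={Uri.EscapeDataString(query)}";

Console.WriteLine(url);
// Output: https://example.com/search?q=C%23%20stopwatch%20%26%20timers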

For ASP.NET applications, you can also use HttpUtility.UrlEncode() (from System.Web), which encodes spaces as +:

using System.Web;

string encodedText = HttpUtility.UrlEncode("hello world!");
Console.WriteLine(encodedText); // Output: hello+world! (UrlEncode leaves '!' unescaped)

For .NET Core and later, Uri.EscapeDataString() is the preferred choice; keep in mind that the + produced by UrlEncode() is only interpreted as a space inside query strings, not in URL paths.

Removing duplicates from a list in C# is a common task, especially when working with large datasets. C# provides multiple ways to achieve this efficiently, leveraging built-in collections and LINQ.

Using HashSet<T> (Fastest for Unique Elements)

A HashSet<T> automatically removes duplicates since it only stores unique values, and it is one of the fastest methods. Note that a HashSet<T> does not guarantee enumeration order, although small examples like this one typically come back in insertion order:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = new HashSet<int>(numbers).ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Using LINQ Distinct (Concise and Readable)

LINQ’s Distinct() method provides an elegant way to remove duplicates:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = numbers.Distinct().ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Removing Duplicates by Custom Property (For Complex Objects)

When working with objects, DistinctBy() from .NET 6+ simplifies duplicate removal based on a property:

using System;
using System.Collections.Generic;
using System.Linq;

List<Person> people = new List<Person>
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 },
    new Person { Name = "Alice", Age = 30 }
};

// Keeps the first Person seen for each distinct Name
people = people.DistinctBy(p => p.Name).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Output: Alice, Bob

// Type declarations must follow top-level statements
class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

For earlier .NET versions, use GroupBy():

people = people.GroupBy(p => p.Name).Select(g => g.First()).ToList();

Performance Considerations

  • HashSet<T> is typically the fastest, but it relies on the element type's equality semantics; for custom classes, override Equals()/GetHashCode() or pass an IEqualityComparer<T> (see the sketch after this list).
  • Distinct() is easy to use but slower than HashSet<T> for large lists.
  • DistinctBy() (or GroupBy()) is useful for complex objects but may have performance trade-offs.
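
Here is a hedged sketch of that comparer-based approach, reusing the Person class from above (the PersonNameComparer class is illustrative, not a built-in type):

using System;
using System.Collections.Generic;
using System.Linq;

List<Person> people = new List<Person>
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 },
    new Person { Name = "Alice", Age = 30 }
};

// The comparer makes the HashSet treat same-named people as duplicates
people = new HashSet<Person>(people, new PersonNameComparer()).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Output: Alice, Bob

// Illustrative comparer: two Person objects are equal when their Names match
class PersonNameComparer : IEqualityComparer<Person>
{
    public bool Equals(Person? x, Person? y) => x?.Name == y?.Name;
    public int GetHashCode(Person p) => p.Name?.GetHashCode() ?? 0;
}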

Conclusion

Choosing the best approach depends on the data type and use case. HashSet<T> is ideal for primitive types, Distinct() is simple and readable, and DistinctBy() (or GroupBy()) is effective for objects.
