How to Safely Cancel an Async Operation in C#

Asynchronous programming is essential for building responsive applications, but it comes with challenges, particularly when you need to cancel operations that are already in flight.

Here's how to safely implement cancellation in C#.

Using CancellationTokenSource

The key to proper cancellation is the CancellationTokenSource class. This provides a token that can be passed to async methods and monitored for cancellation requests.

// Create a cancellation source with timeout
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));
var token = cts.Token;

try
{
    // Pass token to async operations
    await DoLongRunningTaskAsync(token);
}
catch (OperationCanceledException)
{
    // Handle cancellation gracefully
    Console.WriteLine("Operation was canceled");
}
finally
{
    // Always dispose the CancellationTokenSource
    cts.Dispose();
}

Implementing Cancellation in Your Methods

When writing cancellable async methods, check for cancellation at appropriate points:

async Task DoLongRunningTaskAsync(CancellationToken token)
{
    // Check before starting expensive work
    token.ThrowIfCancellationRequested();
    
    for (int i = 0; i < 100; i++)
    {
        // Periodically check during loops
        if (token.IsCancellationRequested)
        {
            // Clean up resources if needed
            CleanupResources();
            
            // Then throw the standard exception
            throw new OperationCanceledException(token);
        }
        
        await Task.Delay(100, token); // Built-in methods accept tokens
    }
}

Best Practices

  1. Always dispose of CancellationTokenSource objects
  2. Use token.ThrowIfCancellationRequested() for cleaner code
  3. Check for cancellation before expensive operations
  4. Pass the token to all nested async calls (see the sketch after this list)
  5. Handle OperationCanceledException appropriately in your calling code
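
For example, practices 4 and 5 often come together when a caller combines its own cancellation with a timeout. Here is a minimal sketch using CancellationTokenSource.CreateLinkedTokenSource, reusing the DoLongRunningTaskAsync method shown above:

// Combine a user-initiated token with a timeout token;
// the linked source is canceled when either of them is.
using var userCts = new CancellationTokenSource();
using var timeoutCts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
using var linkedCts = CancellationTokenSource.CreateLinkedTokenSource(
    userCts.Token, timeoutCts.Token);

try
{
    // Pass the linked token to every nested async call (practice 4)
    await DoLongRunningTaskAsync(linkedCts.Token);
}
catch (OperationCanceledException)
{
    // Canceled either by the caller or by the timeout (practice 5)
    Console.WriteLine("Operation was canceled");
}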

By following these patterns, you can ensure your async operations respond promptly to cancellation requests while maintaining clean, resource-efficient code.

Related

Reading a file line by line is useful when handling large files without loading everything into memory at once.

✅ Best Practice: Use File.ReadLines(), which is more memory efficient than loading the entire file at once.

Example

foreach (string line in File.ReadLines("file.txt"))
{
    Console.WriteLine(line);
}

Why use ReadLines()?

Reads one line at a time, reducing overall memory usage. Ideal for large files (e.g., logs, CSVs).
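
Because File.ReadLines() returns a lazy IEnumerable<string>, it also composes well with LINQ without pulling the whole file into memory. A small sketch (the "app.log" path and the "ERROR" marker are purely illustrative):

using System;
using System.IO;
using System.Linq;

// Count matching lines in a large log without loading the whole file
int errorCount = File.ReadLines("app.log")
    .Count(line => line.Contains("ERROR"));

Console.WriteLine($"Found {errorCount} error lines");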

Alternative: Use StreamReader (More Control)

For scenarios where you need custom processing while reading the contents of the file:

using (StreamReader reader = new StreamReader("file.txt"))
{
    string? line;
    while ((line = reader.ReadLine()) != null)
    {
        Console.WriteLine(line);
    }
}

Why use StreamReader?

Lets you handle exceptions, encoding, and buffering. Supports custom processing (e.g., search for a keyword while reading).
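
As a rough sketch of that kind of custom processing, the example below reads with an explicit encoding and stops at the first line containing a keyword (the file path and the "TODO" keyword are placeholders):

using (var reader = new StreamReader("file.txt", System.Text.Encoding.UTF8))
{
    string? line;
    int lineNumber = 0;
    while ((line = reader.ReadLine()) != null)
    {
        lineNumber++;
        if (line.Contains("TODO"))
        {
            Console.WriteLine($"Found on line {lineNumber}: {line}");
            break; // stop reading once the keyword is found
        }
    }
}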

When to Use ReadAllLines()?

If you need all lines at once, use:

string[] lines = File.ReadAllLines("file.txt");

Caution: Loads the entire file into memory—avoid for large files!

String interpolation, introduced in C# 6.0, provides a more readable and concise way to format strings compared to traditional concatenation (+) or string.Format(). Instead of manually inserting variables or placeholders, you can use the $ symbol before a string to embed expressions directly inside curly braces.

string name = "Walt";
string job = "Software Engineer";

string message = $"Hello, my name is {name} and I am a {job}";
Console.WriteLine(message);

This would produce the final output of:

Hello, my name is Walt and I am a Software Engineer

String interpolation can also be combined with a verbatim string (@) to build multiline strings for even cleaner, more concise results:

string name = "Walt";
string html = $@"
    <div>
        <h1>Welcome, {name}!</h1>
    </div>";

Removing duplicates from a list in C# is a common task, especially when working with large datasets. C# provides multiple ways to achieve this efficiently, leveraging built-in collections and LINQ.

Using HashSet (Fastest for Unique Elements)

A HashSet<T> automatically removes duplicates since it only stores unique values. This is one of the fastest methods:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = new HashSet<int>(numbers).ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Using LINQ Distinct (Concise and Readable)

LINQ’s Distinct() method provides an elegant way to remove duplicates:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = numbers.Distinct().ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Removing Duplicates by Custom Property (For Complex Objects)

When working with objects, DistinctBy() from .NET 6+ simplifies duplicate removal based on a property:

using System.Linq;
using System.Collections.Generic;

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

List<Person> people = new List<Person>
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 },
    new Person { Name = "Alice", Age = 30 }
};

people = people.DistinctBy(p => p.Name).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Output: Alice, Bob

For earlier .NET versions, use GroupBy():

people = people.GroupBy(p => p.Name).Select(g => g.First()).ToList();

Performance Considerations

  • HashSet<T> is the fastest, but it relies on the type's default equality, so it works best for simple types.
  • Distinct() is easy to use but slower than HashSet<T> for large lists.
  • DistinctBy() (or GroupBy()) is useful for complex objects but may have performance trade-offs.

Conclusion

Choosing the best approach depends on the data type and use case. HashSet<T> is ideal for primitive types, Distinct() is simple and readable, and DistinctBy() (or GroupBy()) is effective for objects.
