
How to Remove Duplicates from a List in C# Efficiently

Removing duplicates from a list in C# is a common task, especially when working with large datasets. C# provides multiple ways to achieve this efficiently, leveraging built-in collections and LINQ.

Using HashSet (Fastest for Unique Elements)

A HashSet<T> automatically removes duplicates since it only stores unique values. This is one of the fastest methods, although note that a HashSet<T> does not guarantee the original element order is preserved:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = new HashSet<int>(numbers).ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Using LINQ Distinct (Concise and Readable)

LINQ’s Distinct() method provides an elegant way to remove duplicates:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = numbers.Distinct().ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Removing Duplicates by Custom Property (For Complex Objects)

When working with objects, DistinctBy() from .NET 6+ simplifies duplicate removal based on a property:

using System;
using System.Collections.Generic;
using System.Linq;

List<Person> people = new List<Person>
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 },
    new Person { Name = "Alice", Age = 30 }
};

// Keeps the first element for each distinct Name (available in .NET 6+).
people = people.DistinctBy(p => p.Name).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Output: Alice, Bob

// With top-level statements, type declarations must come after the statements.
class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

For earlier .NET versions, use GroupBy():

people = people.GroupBy(p => p.Name).Select(g => g.First()).ToList();

Performance Considerations

  • HashSet<T> is the fastest, but out of the box it relies on the element type's equality, so it is a natural fit for simple types such as numbers and strings; for complex objects it needs a custom IEqualityComparer<T> (see the sketch after this list).
  • Distinct() is easy to use but adds LINQ iterator overhead, so it can be slightly slower than HashSet<T> for large lists.
  • DistinctBy() (or GroupBy()) is useful for complex objects but has trade-offs: GroupBy() buffers a group per key, so DistinctBy() is usually the lighter of the two.
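
If you need HashSet<T> speed with complex objects, you can pair it with a custom IEqualityComparer<T>. The sketch below reuses the Person class and people list from the example above; the PersonNameComparer type is hypothetical and simply compares the Name property.

class PersonNameComparer : IEqualityComparer<Person>
{
    // Two Person instances are considered equal when their Names match.
    public bool Equals(Person? x, Person? y) => x?.Name == y?.Name;
    public int GetHashCode(Person obj) => obj.Name?.GetHashCode() ?? 0;
}

// Deduplicate by Name while keeping HashSet<T> performance.
people = new HashSet<Person>(people, new PersonNameComparer()).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Output: Alice, Bob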

Conclusion

Choosing the best approach depends on the data type and use case. HashSet<T> is ideal for primitive types, Distinct() is simple and readable, and DistinctBy() (or GroupBy()) is effective for objects.


Related

Storing passwords as plain text is dangerous. Instead, you should hash them using a strong, slow hashing algorithm like BCrypt, which includes built-in salting and resistance to brute-force attacks.

Step 1: Install BCrypt NuGet Package

Before using BCrypt, install the BCrypt.Net-Next package:

dotnet add package BCrypt.Net-Next

or via NuGet Package Manager:

Install-Package BCrypt.Net-Next

Step 2: Hash a Password

Use BCrypt.HashPassword() to securely hash a password before storing it:

using BC = BCrypt.Net.BCrypt; // alias avoids the clash between the BCrypt namespace and the BCrypt class

string password = "mySecurePassword123";
string hashedPassword = BC.HashPassword(password);

Console.WriteLine(hashedPassword); // Output: $2a$... (a salted hash string)

Step 3: Verify a Password

To check a user's login attempt, use BCrypt.Verify():

bool isMatch = BC.Verify("mySecurePassword123", hashedPassword);
Console.WriteLine(isMatch); // Output: True

Proper password hashing should be at the top of your list when building an authentication system.
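
Putting the two steps together, here is a minimal sketch of a register/login flow. The in-memory dictionary stands in for a real user store, and the Register/Login names are hypothetical; it reuses the BC alias shown above.

using System;
using System.Collections.Generic;
using BC = BCrypt.Net.BCrypt;

// Hypothetical in-memory user store, for illustration only (username -> password hash).
var users = new Dictionary<string, string>();

// Registration: hash the password and store only the hash.
void Register(string username, string password) =>
    users[username] = BC.HashPassword(password);

// Login: verify the attempt against the stored hash.
bool Login(string username, string password) =>
    users.TryGetValue(username, out var hash) && BC.Verify(password, hash);

Register("alice", "mySecurePassword123");
Console.WriteLine(Login("alice", "mySecurePassword123")); // True
Console.WriteLine(Login("alice", "wrongPassword"));       // False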


Closing a SqlDataReader correctly prevents memory leaks, connection issues, and unclosed resources. Here’s the best way to do it.

Use 'using' to Auto-Close

Using using statements ensures SqlDataReader and SqlConnection are closed even if an exception occurs.

Example

using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM Users", conn))
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader["Username"]);
        }
    } // ✅ Auto-closes reader here
} // ✅ Auto-closes connection here

This approach automatically closes resources when you are done, and it is cleaner and less error-prone than manual closing.
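
If your data access is asynchronous, the same pattern carries over. Below is a minimal sketch assuming Microsoft.Data.SqlClient on a modern .NET target, where SqlConnection and SqlDataReader support await using; connectionString is assumed to exist as in the example above.

using System;
using Microsoft.Data.SqlClient;

// connectionString is assumed to be defined elsewhere.
await using (SqlConnection conn = new SqlConnection(connectionString))
{
    await conn.OpenAsync();
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM Users", conn))
    await using (SqlDataReader reader = await cmd.ExecuteReaderAsync())
    {
        while (await reader.ReadAsync())
        {
            Console.WriteLine(reader["Username"]);
        }
    } // reader is disposed asynchronously here
} // connection is disposed asynchronously here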

⚡ Alternative: Manually Close in finally Block

If you need explicit control, you can manually close it inside a finally block.

SqlDataReader? reader = null;
try
{
    using SqlConnection conn = new SqlConnection(connectionString);
    conn.Open();
    using SqlCommand cmd = new SqlCommand("SELECT * FROM Users", conn);
    reader = cmd.ExecuteReader();

    while (reader.Read())
    {
        Console.WriteLine(reader["Username"]);
    }
}
finally
{
    reader?.Close();  // ✅ Closes reader if it was opened
}

This approach is more error-prone, because forgetting the finally block leaves the reader open, but it can make sense when you need to handle the reader separately from the command or connection.


Reading a file line by line is useful when handling large files without loading everything into memory at once.

✅ Best Practice: Use File.ReadLines(), which streams the file and is more memory efficient.

Example

foreach (string line in File.ReadLines("file.txt"))
{
    Console.WriteLine(line);
}

Why use ReadLines()?

Reads one line at a time, reducing overall memory usage. Ideal for large files (e.g., logs, CSVs).
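
Because File.ReadLines() returns a lazily evaluated IEnumerable<string>, it also composes with LINQ without pulling the whole file into memory. A minimal sketch (the file name and search term are placeholders):

using System;
using System.IO;
using System.Linq;

// Count matching lines while streaming the file one line at a time.
int errorCount = File.ReadLines("app.log")
                     .Count(line => line.Contains("ERROR"));

Console.WriteLine($"Found {errorCount} error lines.");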

Alternative: Use StreamReader (More Control)

For scenarios where you need custom processing while reading the contents of the file:

using (StreamReader reader = new StreamReader("file.txt"))
{
    string? line;
    while ((line = reader.ReadLine()) != null)
    {
        Console.WriteLine(line);
    }
}

Why use StreamReader?

Lets you handle exceptions, encoding, and buffering. Supports custom processing (e.g., search for a keyword while reading).
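
For instance, here is a minimal sketch that opens the file with an explicit encoding and stops at the first line containing a keyword (the file name and keyword are placeholders):

using System;
using System.IO;
using System.Text;

using (StreamReader reader = new StreamReader("file.txt", Encoding.UTF8))
{
    string? line;
    int lineNumber = 0;
    while ((line = reader.ReadLine()) != null)
    {
        lineNumber++;
        if (line.Contains("keyword"))
        {
            Console.WriteLine($"Found on line {lineNumber}: {line}");
            break; // stop reading as soon as the keyword is found
        }
    }
}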

When to Use ReadAllLines()?

If you need all lines at once, use:

string[] lines = File.ReadAllLines("file.txt");

Caution: Loads the entire file into memory—avoid for large files!
