How to Detect and Remove Duplicate Records in SQL Server

Duplicate records in SQL Server can lead to inaccurate reporting, data inconsistencies, and performance issues. In this article, we’ll go over how to identify duplicate rows and safely remove them while keeping one copy of each.

Detecting Duplicates

To find duplicate records in a table, use the GROUP BY and HAVING clauses to count occurrences of each unique combination of values:

SELECT column1, column2, COUNT(*)
FROM YourTable
GROUP BY column1, column2
HAVING COUNT(*) > 1;

Replace column1, column2 with the columns that define a duplicate in your dataset.

If you need to see the actual duplicate rows, use a ROW_NUMBER() approach:

SELECT *
FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY column1, column2 ORDER BY id) AS row_num
    FROM YourTable
) t
WHERE row_num > 1;

Here, id should be a unique column to order the duplicates.

Removing Duplicates

Method 1: Using ROW_NUMBER()

The safest way to remove duplicates while keeping one copy of each record is to use ROW_NUMBER().

WITH CTE AS (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY column1, column2 ORDER BY id) AS row_num
    FROM YourTable
)
DELETE FROM CTE WHERE row_num > 1;

This deletes every duplicate while keeping the first occurrence in each group, as determined by ORDER BY id.
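
If you want a safety net, wrap the delete in a transaction, inspect the result, and only then commit. A minimal sketch, using the same placeholder names as above:

BEGIN TRANSACTION;

WITH CTE AS (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY column1, column2 ORDER BY id) AS row_num
    FROM YourTable
)
DELETE FROM CTE WHERE row_num > 1;

-- Inspect the remaining rows before making the change permanent
SELECT COUNT(*) FROM YourTable;

COMMIT TRANSACTION; -- or ROLLBACK TRANSACTION; to undo the delete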

Method 2: Using DISTINCT INTO a New Table

If you want to be extra cautious, create a new table containing only the unique records. Note that this approach only removes rows that are duplicated across every column, so it won’t work if the table has a unique id or IDENTITY column:

SELECT DISTINCT * INTO NewTable FROM YourTable;

Then, drop the old table and rename NewTable back to YourTable.
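
A sketch of those two steps follows. Keep in mind that SELECT ... INTO copies only the data and column definitions, so any indexes, constraints, and triggers must be recreated on the new table:

DROP TABLE YourTable;
EXEC sp_rename 'NewTable', 'YourTable';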

Final Thoughts

Always back up your data before running delete operations to prevent accidental data loss. By regularly cleaning up duplicates, you keep your SQL Server database efficient and reliable.
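
For example, a simple full backup before the cleanup might look like this (the database name and file path are placeholders):

BACKUP DATABASE YourDatabase
TO DISK = 'C:\Backups\YourDatabase.bak';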


Related

Storing passwords as plain text is dangerous. Instead, you should hash them using a strong, slow hashing algorithm like BCrypt, which includes built-in salting and resistance to brute-force attacks.

Step 1: Install BCrypt NuGet Package

Before using BCrypt, install the BCrypt.Net-Next package:

dotnet add package BCrypt.Net-Next

or via NuGet Package Manager:

Install-Package BCrypt.Net-Next

Step 2: Hash a Password

Use BCrypt.HashPassword() to securely hash a password before storing it. Because the BCrypt class shares its name with the BCrypt.Net namespace, the easiest way to reference it is through a using alias:

using BC = BCrypt.Net.BCrypt;

string password = "mySecurePassword123";
string hashedPassword = BC.HashPassword(password);

Console.WriteLine(hashedPassword); // Output: e.g. $2a$...

Step 3: Verify a Password

To check a user's login attempt, use BCrypt.Verify():

bool isMatch = BC.Verify("mySecurePassword123", hashedPassword);
Console.WriteLine(isMatch); // Output: True
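
You can also raise BCrypt's work factor to make each hash slower and brute-force attacks more expensive. A minimal sketch using the optional work-factor overload (12 is an illustrative value; benchmark what your hardware can afford):

// Each +1 to the work factor roughly doubles the hashing time
string strongerHash = BC.HashPassword("mySecurePassword123", 12);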

Proper password hashing should be at the top of your list when building authentication systems.


Reading a file line by line is useful when handling large files without loading everything into memory at once.

✅ Best Practice: Use File.ReadLines(), which is more memory-efficient than reading the whole file into memory at once.

Example

foreach (string line in File.ReadLines("file.txt"))
{
    Console.WriteLine(line);
}

Why use ReadLines()?

It reads one line at a time, reducing overall memory usage, and is ideal for large files (e.g., logs, CSVs).

Alternative: Use StreamReader (More Control)

For scenarios where you need custom processing while reading the contents of the file:

using (StreamReader reader = new StreamReader("file.txt"))
{
    string? line;
    while ((line = reader.ReadLine()) != null)
    {
        Console.WriteLine(line);
    }
}

Why use StreamReader?

It lets you handle exceptions, encoding, and buffering, and supports custom processing (e.g., searching for a keyword while reading), as in the sketch below.
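
A minimal sketch of that keyword scenario (the file name and the "ERROR" keyword are placeholders):

using (StreamReader reader = new StreamReader("file.txt"))
{
    string? line;
    int lineNumber = 0;
    while ((line = reader.ReadLine()) != null)
    {
        lineNumber++;
        // Print every line containing the keyword, with its position
        if (line.Contains("ERROR"))
        {
            Console.WriteLine($"Line {lineNumber}: {line}");
        }
    }
}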

When to Use ReadAllLines()?

If you need all lines at once, use:

string[] lines = File.ReadAllLines("file.txt");

Caution: This loads the entire file into memory, so avoid it for large files!


Removing duplicates from a list in C# is a common task, especially when working with large datasets. C# provides multiple ways to achieve this efficiently, leveraging built-in collections and LINQ.

Using HashSet (Fastest for Unique Elements)

A HashSet<T> automatically removes duplicates since it only stores unique values. This is one of the fastest methods:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = new HashSet<int>(numbers).ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5 (HashSet<T> does not guarantee element order)

Using LINQ Distinct (Concise and Readable)

LINQ’s Distinct() method provides an elegant way to remove duplicates:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = numbers.Distinct().ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Removing Duplicates by Custom Property (For Complex Objects)

When working with objects, DistinctBy() from .NET 6+ simplifies duplicate removal based on a property:

using System.Linq;
using System.Collections.Generic;

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

List<Person> people = new List<Person>
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 },
    new Person { Name = "Alice", Age = 30 }
};

people = people.DistinctBy(p => p.Name).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Output: Alice, Bob

For earlier .NET versions, use GroupBy():

people = people.GroupBy(p => p.Name).Select(g => g.First()).ToList();

Performance Considerations

  • HashSet<T> is typically the fastest; it works with any type that has value-based equality, or with a custom IEqualityComparer<T> (see the sketch after this list).
  • Distinct() is easy to use but carries some LINQ overhead compared to using a HashSet<T> directly.
  • DistinctBy() (or GroupBy()) is useful for complex objects but adds grouping overhead.
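
A minimal sketch of HashSet<T> with a custom comparer, reusing the Person class above (PersonNameComparer is a hypothetical helper that treats two people as equal when their names match):

class PersonNameComparer : IEqualityComparer<Person>
{
    public bool Equals(Person? x, Person? y) => x?.Name == y?.Name;
    public int GetHashCode(Person p) => p.Name?.GetHashCode() ?? 0;
}

// Keeps one Person per distinct Name
people = new HashSet<Person>(people, new PersonNameComparer()).ToList();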

Conclusion

Choosing the best approach depends on the data type and use case. HashSet<T> is ideal for primitive types, Distinct() is simple and readable, and DistinctBy() (or GroupBy()) is effective for objects.
