
16 data security best practices for IT teams


Data security best practices protect sensitive data across its full lifecycle, covering classification and access controls, encryption, change monitoring, and incident response. Organizations that treat these controls as isolated checkboxes rather than a connected program consistently fail audits and suffer preventable breaches. This guide covers 16 practices with compliance framework alignment to help IT and security teams build a program that holds up in both audits and real incidents.

The average data breach now costs organizations $4.44 million globally, according to IBM's 2025 Cost of a Data Breach Report. That figure reflects direct costs, including incident response, legal exposure, and regulatory penalties, but doesn't account for operational disruption, reputational damage, or the compounding risk of a second breach following the first.

Most breaches aren't the result of sophisticated attacks on hardened systems. They follow from the same gaps that organizations have been warned about for years: unclassified sensitive data stored in unsecured locations, over-provisioned access rights that outlive the business need that created them, unmonitored changes to critical systems, and unpatched vulnerabilities. The controls that close those gaps are well established; the gap is in applying them consistently.

This guide covers the 16 data security best practices that matter most: what they are, why they work, and how to implement them in a mid-market or enterprise environment.

What is data security?

Data security is the set of controls, policies, and technologies that protect digital information from unauthorized access, corruption, theft, or loss across its full lifecycle, from creation and storage through use, transmission, and deletion.

It covers people, processes, and technology, and applies to data regardless of where it lives: on-premises servers, cloud environments, endpoints, or databases.

Data security is distinct from but related to cybersecurity and data privacy. Cybersecurity is the broader discipline of protecting systems, networks, and programs from digital attacks. Data privacy governs the lawful collection, use, and retention of personal data based on consent and regulatory obligation. A complete data protection program requires all three, but data security is the operational layer where most of the day-to-day work lives.


Why is data security important?

A strong data security program protects the organization's ability to operate, comply with regulations, and maintain the trust that makes doing business possible.

Financial exposure is measurable and growing

The IBM 2025 Cost of a Data Breach Report places the global average breach cost at $4.44 million, covering incident response, legal exposure, regulatory penalties, and notification. For mid-market organizations with leaner security teams, the financial impact of a single incident is often disproportionate to the resources available to recover.

Regulatory compliance carries direct legal risk

GDPR, CCPA, HIPAA, and PCI DSS impose mandatory security obligations on organizations that handle personal or regulated data. Demonstrating compliance requires documented controls, audit trails, and evidence of ongoing monitoring. GDPR fines alone can reach €20 million or 4% of global annual turnover, whichever is higher.

Operational continuity depends on data availability

Recovery from a ransomware incident can take weeks, with service disruptions measured in lost revenue and customer trust. Data corruption that goes undetected is equally damaging: operations that depend on silently altered data face decisions built on a compromised foundation.

Reputational damage outlasts the incident

Customer trust recovery takes significantly longer than technical recovery. A breach in a regulated sector affects pipeline, retention, and partner relationships for months or years after the incident, costs that rarely appear in breach estimates and are consistently underweighted in security investment decisions.

The attack surface is expanding

Remote work, cloud adoption, and mobile device proliferation have replaced the defined network perimeter with a distributed environment spanning multiple providers, personal devices, and third-party integrations. Data security programs that haven't evolved alongside that environment are securing a boundary that no longer exists.

Netwrix Access Analyzer resolves nested AD groups and SharePoint inheritance to surface overexposed sensitive data. Download a free trial

16 data security best practices

The practices below cover the full scope of an enterprise data security program, from knowing what data you have to testing whether your defenses hold.

1. Identify and classify sensitive data

Effective data security starts with knowing exactly what types of data you have and where they live. Data discovery technology scans your data repositories and reports on findings. From there, you organize data into categories using a classification process.

A data discovery engine typically uses regular expressions and pattern matching for its searches, allowing for flexibility across structured and unstructured data.
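As a minimal sketch of how such an engine works, the function below scans text for a few common sensitive-data patterns. The regular expressions are illustrative only; production discovery engines use validated, locale-aware detectors (for example, Luhn checksums for card numbers) to reduce false positives.

```python
import re

# Illustrative patterns only -- not production-grade detectors.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def discover(text: str) -> dict[str, list[str]]:
    """Return every match for each sensitive-data pattern found in text."""
    results = {}
    for name, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            results[name] = matches
    return results
```

The same approach extends to files and repositories by feeding their contents through the scanner and tagging any location that produces hits for classification review.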

Using data discovery and classification technology helps you control whether users can access critical data and prevents it from being stored in unsecured locations, reducing the risk of improper exposure and data loss.

Label all critical or sensitive data with a digital signature denoting its classification, so it can be protected according to its value.

Classify data based on its sensitivity and value. A common classification taxonomy includes:

  • Public: Data that doesn't require special protection and can be shared freely.
  • Private: Data that employees can access but that should be protected from the general public.
  • Confidential: Information that may be shared only with selected users, such as proprietary information and trade secrets.
  • Restricted: Highly sensitive data such as medical records, financial information, and personally identifiable information that is protected by regulation.

Put controls in place to prevent users from improperly modifying the classification level of data. Only selected users should be able to downgrade a classification, since that makes the data more widely available.

Automated classification addresses the scalability problem that makes manual approaches inconsistent and error-prone, and it's the foundation of any mature data classification for compliance programs.

2. Create a data usage policy

Data classification alone isn't sufficient. You also need a policy that specifies the types of access permitted for each classification level, the conditions under which data can be accessed, who holds access rights, and what constitutes appropriate use. All policy violations should have clear, documented consequences.

Assign ownership to someone who understands the organization's objectives and the applicable compliance regulations. Communicate the policy to all users, enforce it consistently, and review it when regulations or the data landscape change.

3. Implement access controls

Appropriate access controls restrict access to data based on the principle of least privilege: each user receives only those privileges essential to their assigned responsibilities, no more. Access controls operate across three layers.

Administrative controls

These are the procedures and policies that all employees must follow. A security policy lists acceptable actions, the level of risk the organization is willing to accept, and the penalties for violations.

Key components include a supervisory accountability structure (managers are held responsible for the activities of their staff), training programs that educate users on data usage policies and reinforce understanding periodically, and an effective employee termination procedure that ensures departing staff lose access to IT infrastructure immediately.

Implement role-based access control (RBAC) in every application that supports it, using mechanisms such as Active Directory groups.

Technical controls

These govern how data is stored and who can reach it. In most cases, users shouldn't copy or store sensitive data locally; have them work with data remotely instead. Thoroughly clear both client and server caches after a user logs off or a session times out.

Enforce least-privilege user permissions: Full Control, Modify, Read and Execute, Read, and Write permissions should be granted strictly according to role requirements.

Access control lists (ACLs) define who can access which resources at what level and can be based on allowlists (permitted items) or denylists (prohibited items).

In Microsoft Windows, NTFS permissions are configured at the file system level and form the basis of most ACL implementations.

Several security devices further restrict data access:

  • Data loss prevention (DLP) systems monitor workstations, servers, and networks to ensure sensitive data isn't deleted, removed, moved, or copied without authorization, and track who is transmitting it.
  • Firewalls isolate one network from another, preventing undesirable traffic from entering the organization's network and blocking data leakage to unauthorized third-party servers.
  • Network access control (NAC) restricts the availability of network resources to endpoint devices that comply with your security policy, preventing unauthorized devices from accessing data directly.
  • Proxy servers act as intermediaries when client software requests resources from other servers, evaluating and filtering requests to restrict access to sensitive data from the internet.

Physical controls

Physical controls are frequently overlooked but can lead to complete compromise if neglected. Lock down every workstation so it can't be removed from the area. Lock computer cases to prevent hard drives from being removed.

Enable UEFI Secure Boot and configure TPM 2.0 with BitLocker pre-boot authentication to prevent attackers from booting into unauthorized operating systems or accessing disk contents from removable media.

Avoid using public Wi-Fi without a VPN or SSH connection. Mobile devices that can access sensitive data should require complex passwords and use the same access controls and security software as other endpoints.

Network segmentation divides the network into functional zones, each assigned different data classification rules and security levels. Segmentation limits the potential damage from a security incident to a single zone, forcing attackers to compromise each segment individually, a process that dramatically increases exposure to detection.

Netwrix Auditor records before-and-after values for access and change events across hybrid Microsoft environments. Download a free trial

4. Implement change management and database auditing

Tracking all database and file server activity is a foundational security control. Monitoring access and changes to sensitive information and associated permissions provides the historical visibility needed to detect unauthorized activity, investigate incidents, and demonstrate compliance.

Retain login activity for at least one year for security audits. Automatically report any account that exceeds the maximum number of failed login attempts to the information security administrator for investigation.
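The failed-login threshold described above can be sketched as a simple counting pass over parsed log events. The event structure and threshold here are illustrative assumptions; real implementations read from your SIEM or directory audit log.

```python
from collections import Counter

MAX_FAILED_ATTEMPTS = 5  # illustrative threshold; set per your policy

def accounts_to_flag(events: list[dict]) -> list[str]:
    """Given parsed login events, return accounts whose failed-login
    count exceeds the configured maximum and should be reported to
    the information security administrator."""
    failures = Counter(e["account"] for e in events
                       if e["status"] == "failure")
    return sorted(acct for acct, count in failures.items()
                  if count > MAX_FAILED_ATTEMPTS)
```

In practice this check runs continuously against live audit data, so that lockout-evading password-spray patterns surface quickly rather than at the next scheduled review.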

Using historical information to understand what data is sensitive, how it's being used, who's using it, and where it's going helps build accurate, effective policies and surfaces previously unknown risks.

5. Use data encryption

Encryption is one of the most fundamental data security controls. Encrypt all critical business data both at rest and in transit, whether on portable devices or during network transfer.

Portable systems that store sensitive data of any kind should use encrypted disk solutions. Encrypting the hard drives of desktop systems that contain critical or proprietary information protects essential data even if physical devices are stolen.

On Windows systems, BitLocker provides full-disk encryption and is Microsoft's recommended approach for protecting data at rest on managed devices.

Microsoft Purview Information Protection extends encryption to files and emails based on sensitivity labels, enforcing access controls that travel with the data.

Encrypting File System (EFS) remains available in Windows for per-file encryption, though it's no longer the recommended approach for most environments. When a file is EFS-protected, unauthorized users receive an "access denied" error. BitLocker complements EFS by encrypting the entire volume, protecting data on lost or stolen devices and enabling secure data disposal when devices are decommissioned.

Hardware-based encryption via a Trusted Platform Module (TPM) can be applied in addition to software-based encryption. A TPM chip stores cryptographic keys, passwords, or certificates and can assist with hash key generation for whole-disk encryption solutions like BitLocker.

6. Apply zero trust principles

Traditional perimeter-based security assumes that users inside the network can be trusted. Zero trust removes that assumption. Authenticate, authorize, and continuously validate every access request, regardless of its origin, before granting access.

Applying zero trust to data security means requiring verification at the data layer, not just the network perimeter. Grant access based on confirmed identity, device health, and context.

Replace standing access to sensitive data with just-in-time access grants that expire. Constrain lateral movement through microsegmentation. The underlying principle is that trust is never implicit: it must be continuously earned and re-evaluated.

For organizations running Microsoft environments, zero trust implementation typically involves enforcing MFA across all access paths, deploying conditional access policies in Entra ID, and governing privileged access through time-limited elevation rather than persistent admin rights.

7. Implement data security posture management (DSPM)

Data security posture management (DSPM) continuously discovers, classifies, and assesses the security posture of data across cloud and on-premises environments. It identifies sensitive data that is over-exposed, misconfigured, or inadequately protected, and provides remediation guidance.

Where traditional data security tools operate on what IT knows about, DSPM operates on the entire data landscape, including data that has never been inventoried, cloud repositories provisioned outside standard IT processes, and sensitive data that has drifted into locations with inappropriate access controls.

For organizations managing hybrid environments, DSPM closes the visibility gap that point tools leave.

Netwrix Access Analyzer provides enterprise DSPM capabilities, combining sensitive data discovery, classification, data access governance, and risk prioritization in a single solution. It identifies excessive permissions, detects open access to sensitive data, and surfaces high-risk permission configurations across file servers, SharePoint, and other data repositories.

8. Secure cloud data environments

Cloud environments introduce unique data security challenges that on-premises tools and processes can't fully address: shared infrastructure, API-accessible attack surfaces, rapid provisioning that creates ungoverned data stores, and provider-dependent controls that vary by service model.

Key cloud data security practices include:

  • Classifying data before migration to understand what is moving and what protection it requires.
  • Enforcing least privilege on cloud IAM roles with the same rigor applied to on-premises accounts.
  • Encrypting data at rest and in transit for every cloud workload.
  • Monitoring for misconfiguration continuously rather than periodically.
  • Ensuring service contracts define clear data ownership, storage location, and deletion obligations.
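Continuous misconfiguration monitoring amounts to evaluating policy rules against resource metadata on every scan. The sketch below checks storage resources against two baseline rules; the dictionary fields are hypothetical, not any cloud provider's actual schema.

```python
def find_misconfigurations(buckets: list[dict]) -> list[str]:
    """Flag storage resources that violate baseline policy.
    Field names are illustrative, not a real provider API."""
    findings = []
    for bucket in buckets:
        if bucket.get("public_access"):
            findings.append(f"{bucket['name']}: publicly accessible")
        if not bucket.get("encrypted_at_rest"):
            findings.append(f"{bucket['name']}: encryption at rest disabled")
    return findings
```

Running a check like this on a schedule, and alerting on any non-empty result, turns periodic configuration review into the continuous monitoring the list above calls for.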

Choosing a cloud service provider doesn't transfer compliance obligations. The responsibility for ensuring cloud-based applications and services continuously meet regulatory requirements remains with your organization.

Reputable providers hold certifications such as ISO 27001 and SOC 2 and consent to independent audits. Verify these before committing to a provider.

9. Back up your data

Duplicate critical business assets to provide redundancy and serve as backups. From a security standpoint, there are three primary backup types:

  • Full: All data is archived. Time-consuming and resource-intensive, with significant impact on server performance.
  • Differential: All changes since the last full backup are archived. Less impact than full backups.
  • Incremental: All changes since the last backup of any type are archived.

Organizations typically use a combination: a full backup at a fixed interval with differential or incremental backups in between. If the system fails shortly after the full backup, restore from that; if it fails later, use a combination.

Whatever backup strategy you choose, test it periodically by restoring backup data to a test machine. Store backups in multiple geographic locations to ensure recovery from physical disasters such as fires or hardware failures. Encrypt all backups containing sensitive data.
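A periodic restore test can be automated by comparing checksums of the source and the restored copy. This is a minimal sketch using SHA-256 from the standard library; real backup tools verify at the block or catalog level as well.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large backups fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source: Path, restored: Path) -> bool:
    """A restore test passes only if the restored copy is byte-identical."""
    return sha256(source) == sha256(restored)
```

A failing comparison on a test restore is exactly the signal you want before an incident, not during one.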


10. Build a ransomware-resilient backup architecture

Backups only provide recovery assurance if attackers can't reach them. Ransomware operators routinely target and destroy recovery points before deploying encryption, which means a backup that a compromised admin account can delete is not a reliable control.

The current standard is the 3-2-1-1-0 rule: three copies of data, on two different media types, with one copy offsite, one copy immutable or air-gapped, and zero errors verified through regular restore testing.
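An inventory audit against the 3-2-1-1-0 rule can be expressed as a set of checks over backup-copy metadata. The copy attributes below are illustrative assumptions; map them to whatever your backup platform actually reports.

```python
def check_3_2_1_1_0(copies: list[dict], restore_errors: int) -> list[str]:
    """Report which parts of the 3-2-1-1-0 rule an inventory violates.
    Each copy dict carries illustrative media/offsite/immutable fields."""
    gaps = []
    if len(copies) < 3:
        gaps.append("fewer than 3 copies")
    if len({c["media"] for c in copies}) < 2:
        gaps.append("fewer than 2 media types")
    if not any(c["offsite"] for c in copies):
        gaps.append("no offsite copy")
    if not any(c["immutable"] for c in copies):
        gaps.append("no immutable or air-gapped copy")
    if restore_errors != 0:
        gaps.append("restore testing reported errors")
    return gaps
```

An empty result means the inventory satisfies all five conditions; anything else is a concrete remediation list.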

Immutable storage uses Write Once Read Many (WORM) controls that prevent modification or deletion for a defined retention period, even by privileged accounts.

In hybrid environments, implement this by enabling S3 Object Lock in compliance mode or Azure immutable blob storage for cloud-tier backups, and deploying hardened Linux repositories or purpose-built immutable appliances on premises.

Run the backup plane on a separate identity store from production, require phishing-resistant MFA on all backup consoles, and restrict backup administration to a dedicated privileged role. Test recovery quarterly in an isolated environment to confirm backups are usable before you need them.

11. Govern non-human identities and secrets

Service accounts, API keys, OAuth tokens, CI/CD pipeline credentials, certificates, and cloud IAM roles now outnumber human identities in most enterprise environments, and they carry privileges without the behavioral signals that make human account compromise detectable.

The OWASP 2025 Non-Human Identities Top 10 catalogues the most common failure modes: improper offboarding, secret leakage, overprivileged service accounts, and credential reuse across environments.

Most non-human identities (NHIs) operate with static, long-lived credentials that are never rotated, embedded in source code or configuration files, with no owner accountable for their use.

Centralize secret storage in a dedicated vault such as HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager. Eliminate hardcoded credentials from source code, container images, and CI/CD configuration.

Where possible, replace static credentials with short-lived dynamically generated tokens using workload identity federation or OIDC trust. Maintain an inventory of NHIs across on-premises and cloud tenants, assign a human owner to each, and automate rotation and deprovisioning when applications retire.
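An NHI inventory audit reduces to two of the checks above: does each identity have an accountable owner, and is its credential within the rotation window? The record structure and 90-day policy below are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

MAX_CREDENTIAL_AGE = timedelta(days=90)  # illustrative rotation policy

def stale_identities(inventory: list[dict]) -> list[str]:
    """Flag non-human identities whose credentials are overdue for
    rotation or that have no accountable human owner."""
    now = datetime.now(timezone.utc)
    flagged = []
    for nhi in inventory:
        if nhi.get("owner") is None:
            flagged.append(f"{nhi['name']}: no assigned owner")
        if now - nhi["last_rotated"] > MAX_CREDENTIAL_AGE:
            flagged.append(f"{nhi['name']}: credential overdue for rotation")
    return flagged
```

Feeding this report into ticketing, with the assigned owner as the assignee, is what turns an inventory from a spreadsheet into an enforced control.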

12. Harden your systems

Any technology that could store sensitive data, even temporarily, should be adequately secured based on the type of information it can access. The first step is making sure the operating system is configured to be as secure as possible.

  • Operating system baseline: Most operating systems run unnecessary services by default that give attackers additional avenues into your system. Enable only the programs and listening services essential for employees to do their jobs. Disable anything without a business purpose. Create a secure baseline OS image for typical employees to reduce configuration drift and provide a starting point for exception management.
  • Windows: Disable NTLMv1 and enforce NTLMv2 or Kerberos-only authentication. Disable SMBv1 and enforce SMB signing. Apply CIS benchmarks or DISA STIGs as your hardening baseline. Enable logging for critical system events.
  • Linux: Disable unnecessary services and ports, disable trust authentication used by "r commands," disable unnecessary setuid and setgid programs, and reconfigure user accounts for only the necessary users.
  • Web servers: Limit traffic to only what is required for your business. Ensure users are granted only the permissions they need for their tasks. Thoroughly test, debug, and approve all executable scripts before deployment.
  • Email servers: Add an active antivirus scanner to email servers to reduce viruses introduced into your network. For Exchange mail stores, use a dedicated email antivirus scanner capable of detecting phishing and other social engineering attacks.
  • FTP servers: FTP is inherently insecure; many FTP systems transmit credentials unencrypted. Create a separate drive or subdirectory for file transfers. Use VPN or SSH connections where possible. Disable anonymous access. The most effective approach is to replace FTP entirely with SFTP.
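Checking a host against the secure baseline image described above is, at its simplest, a set comparison between what's running and what's approved. This sketch assumes you can already enumerate running services (for example, from your configuration management tool).

```python
def baseline_drift(running: set[str], approved: set[str]) -> dict[str, set[str]]:
    """Compare services running on a host to the approved baseline image."""
    return {
        "unauthorized": running - approved,   # services to disable
        "missing": approved - running,        # e.g. a logging agent that died
    }
```

Both directions matter: an unauthorized service is an extra attack avenue, while a missing one often means a security agent has silently stopped.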

13. Implement a proper patch management strategy

Establish a patching strategy for both operating systems and applications. Ensuring all application versions in your IT environment are up to date can be tedious, but it's essential for data protection.

One of the best ways to ensure security is to enable automatic antivirus and system updates. For critical infrastructure, test patches thoroughly before deployment to ensure they don't impact functionality or introduce vulnerabilities.

Operating system patch management

There are three types of operating system patches, each with a different level of urgency:

  1. Hotfix: An immediate and urgent fix, typically addressing serious security issues. These are not optional.
  2. Patch: Provides additional functionality or a non-urgent fix. Sometimes optional.
  3. Service pack: The complete set of updates and patches to date. Always apply these.

Test all updates before applying them to production to confirm they don't cause issues.

Application patch management

It's also necessary to regularly update and patch applications. Once a vulnerability is discovered in an application, an attacker can exploit it to gain access to or compromise a system. Most vendors release patches regularly, and you should routinely check for new ones.

Many attacks today target client systems for the simple reason that customers don't always manage patching effectively. Establish dedicated maintenance days for patching and testing all critical applications.
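The routine check for new patches can be sketched as a version comparison between what's installed and what the vendor has released. The simple dotted-numeric comparison below is an assumption; real version schemes (epochs, pre-release tags) need a proper parser.

```python
def outdated(installed: dict[str, str], latest: dict[str, str]) -> list[str]:
    """Return applications whose installed version trails the vendor's
    latest release, using simple dotted-numeric comparison."""
    def key(version: str) -> tuple[int, ...]:
        return tuple(int(part) for part in version.split("."))
    return sorted(app for app, ver in installed.items()
                  if app in latest and key(ver) < key(latest[app]))
```

Note that tuple comparison handles multi-digit components correctly ("3.0.14" is newer than "3.0.8"), which naive string comparison gets wrong.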

14. Protect against insider threats

While organizations spend significant resources protecting networks from external attacks, insider threats remain a leading cause of data exposure. Insider threats take two forms. An authorized insider is someone who misuses their rights and privileges, whether accidentally, deliberately, or because their credentials were stolen.

An unauthorized insider is someone who has connected to the network behind its external defenses: someone who plugged into a jack in the lobby, or someone accessing an unprotected wireless network connected to the internal network.

Monitor internal activity as rigorously as perimeter activity. With users increasingly working remotely, securing remote connections is equally important. Require strong authentication for all remote connections. Adequately secure devices used for remote network access. Log all remote sessions.

Netwrix 1Secure governs what AI agents can access and tracks every AI-driven data interaction. Request a demo.

15. Use endpoint security systems

Network endpoints are under constant attack, and endpoint security infrastructure is critical to protecting against data breaches, unauthorized programs, and advanced malware. With the increased use of mobile devices, network endpoints are expanding and becoming increasingly undefined.

At a minimum, deploy the following:

  • Antivirus software should be installed and kept up to date on all servers and workstations. In addition to actively monitoring incoming files, it should regularly conduct scans to catch infections that may have slipped through, including ransomware.
  • Anti-spyware tools block or remove spyware: software installed without the user's knowledge to collect personal information or monitor behavior. Regularly scan for spyware, including tracking cookies on hosts.
  • Host-based firewalls are software-based firewalls installed on each computer in the network. They filter packets to prevent unwanted traffic from leaving or reaching the system. Perimeter firewalls can't stop attacks that originate inside the network, such as malware spreading between internal hosts, which is why each endpoint needs its own filter. Configure a standard host firewall policy according to your organization's needs and deploy those settings across the environment.
  • Host-based intrusion detection systems (IDS) monitor the system state and verify it is as expected. Most use integrity verification: calculating cryptographic hashes of files to be monitored in a known clean state, then scanning for changes and alerting when a monitored file's fingerprint changes.
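The integrity-verification approach described in the last bullet can be sketched in a few lines: record a cryptographic fingerprint of each monitored file in a known clean state, then re-hash and alert on differences. Real host IDS tools also protect the baseline database itself from tampering.

```python
import hashlib
from pathlib import Path

def fingerprint(paths: list[Path]) -> dict[str, str]:
    """Record a SHA-256 baseline for each monitored file (clean state)."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in paths}

def changed_files(baseline: dict[str, str]) -> list[str]:
    """Re-hash monitored files and report any whose fingerprint differs."""
    return [path for path, digest in baseline.items()
            if hashlib.sha256(Path(path).read_bytes()).hexdigest() != digest]
```

Because any change to a monitored file changes its hash, even a one-byte modification to a system binary or configuration file surfaces on the next scan.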

16. Perform vulnerability assessments and penetration testing

Regular testing validates that your security controls actually work as intended. Vulnerability assessments use port scanners and scanning tools such as nmap, OpenVAS, and Nessus to scan the environment from an external machine, looking for open ports and the version numbers of running services.

Results can be cross-referenced against known vulnerabilities and expected patch levels to verify that endpoint systems adhere to security policies.
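At its core, the open-port discovery these tools perform is a TCP connect attempt per port, which can be sketched with the standard library. This is a teaching sketch, not a replacement for nmap, and should only ever be pointed at hosts you're authorized to test.

```python
import socket

def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Attempt a TCP connection to each port; a completed handshake
    means the port is open. Only scan hosts you own or are authorized
    to test."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports
```

Dedicated scanners add what this sketch lacks: parallelism, SYN-level stealth scanning, service version detection via banner grabbing, and cross-referencing against vulnerability databases.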

Penetration testing goes further, actively testing systems, networks, or applications to find and exploit security vulnerabilities. It can also test a security policy, assess compliance, evaluate employee security awareness, and test incident detection and response.

Perform penetration testing at least annually. The main strategies used by security professionals are:

  • Targeted testing: Performed collaboratively by the organization's IT team and the testing team; sometimes called the "lights on" approach.
  • External testing: Targets externally visible servers and devices to determine whether an outside attacker can gain entry and how far they could go.
  • Internal testing: Performs an inside attack behind the firewall by an authorized user with standard access privileges, useful for estimating potential insider damage.
  • Blind testing: Simulates a real attacker by severely limiting the information provided to the testing team, typically only the company name.
  • Double-blind testing: Takes blind testing further; only one or two people in the organization know a test is being conducted.
  • Black box testing: Penetration testers receive no information before the test and must find their own way in.
  • White box testing: Provides testers with information about the target network (IP addresses, infrastructure schematics, protocols) before testing begins.

How Netwrix supports data security

Identifying sensitive data, governing who can access it, and maintaining a clear record of what changes across your environment are the three practices where most data security programs have the most to gain, and where manual approaches consistently fall short.

Netwrix Access Analyzer discovers and classifies sensitive data across file servers, SharePoint, and other data repositories, maps who has access to it, and identifies excessive permissions and open-access configurations that create risk. It provides the data security posture management capabilities that organizations need to move from reactive incident response to proactive risk reduction.

Netwrix Auditor delivers searchable, before-and-after visibility into changes across Active Directory, file servers, Microsoft 365, and other systems, producing the audit trail that compliance frameworks require and investigators need. It gives security teams the context to understand what happened, who did it, and when, without digging through incomplete native logs.

Request a demo to see how Netwrix helps organizations discover, classify, and govern sensitive data across hybrid environments.

