SQL Safe Habits
Messing around with a production database can be dangerous, but sometimes it’s necessary. As a developer, you spend a lot of time doing things that can affect production data: sometimes you’re touching it directly, and sometimes you’re writing the code that will touch it for you. A few safe habits can go a long way toward protecting your data, your applications, and your company.
Security
How many times every year do we hear about a data breach where someone left a database exposed to the internet without any form of security? There are a lot of good habits worth adopting around securing a production database, or even a dev or UAT one, but first and foremost is this: a database should never, ever be accessible directly from the internet. Ever. There’s absolutely no reason for any database to be reachable by typing in a URL or IP address. It should always sit behind a firewall or be accessible only through a secure VPN of some sort. You still need to be able to get to it when you have a legitimate need, but that path should be harder to reach than anything else you run. Make the database unseeable from the outside, and you’ve already solved a huge part of the problem.
Use least-privilege accounts as well, and never share account login information. If a dev or support person needs access, don’t hand them the credentials the application uses. If possible, give them their own account, limited in time and with only as much access as they need. If it’s production, that should almost always be read-only access. If they absolutely have to make changes, make sure they hold that privilege only for as long as it takes to make the change.
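For example, on SQL Server, a personal, read-only account for a support engineer might look something like this sketch (the login name and password are placeholders; other systems have their own equivalents):
-- Create a dedicated, personal login and map it into the database
CREATE LOGIN SupportJane WITH PASSWORD = 'use-a-strong-generated-password';
CREATE USER SupportJane FOR LOGIN SupportJane;
-- Read-only access, nothing more
ALTER ROLE db_datareader ADD MEMBER SupportJane;
-- When the task is done, remove the access
DROP USER SupportJane;
DROP LOGIN SupportJane;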
Audit, audit, audit. Track what every account does. That’s another reason to never share credentials. If everyone has access to the application credentials, you have no idea who actually did what. There’s no accountability, and no way to track who screwed up.
There are a lot of other security considerations to keep in mind: MFA, encryption (both at rest and in transit), data masking for dev/UAT, patching, and so forth. It’s not for the faint of heart. Take advantage of experts whenever possible.
Backups
Make regular backups, and TEST those backups. I can’t tell you how many times over the years I have seen production go down, and when the team goes to restore the backups, they find the backups don’t work. Test the backups and test restoring them at least once a month for a production system, and more often than that for a mission-critical one. And DON’T store the backups on the same drive as the database.
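On SQL Server, a quick sanity check on every backup is cheap, though it’s no substitute for actually restoring to another server and looking at the data (the paths here are placeholders):
-- Back up to a DIFFERENT drive than the one the database lives on
BACKUP DATABASE DatabaseName TO DISK = 'E:\Backups\DatabaseName.bak' WITH CHECKSUM;
-- Verify the backup file is complete and readable (not a full restore test!)
RESTORE VERIFYONLY FROM DISK = 'E:\Backups\DatabaseName.bak' WITH CHECKSUM;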
Fun story: One place I worked hosted their own JIRA server. All the various JIRA boards for all the clients were on that server. They made regular backups. Those backups were stored on the same hard drive as the database itself. We all came in one morning to find JIRA was unavailable. Turns out, the hard drive had died. All the client task data was gone… forever. Don’t store the backups on the same drive.
Don’t Lock Tables
Sometimes you gotta access the database and look at the data directly. That’s fine. Sometimes you gotta run a big query against a table that will take a while. That’s fine too. The problem is that, depending on the database system you’re working with, the default behavior can be to lock the table being read by a SELECT statement. If you’re running a big, long query on a production database, against a table that all kinds of other processes depend on, that’s bad. When a table gets locked, no other process can access its data until the lock is released, and that can cause production systems to start failing all over the place. SQL Server is especially prone to this, and it’s really easy to write bad code against it. Get to know the proper ways to work with tables and data rows in your particular database system so that you don’t lock tables excessively.
Transactions also hold locks by default in many systems. Always write your transactions to do the absolute minimum necessary to complete the data change in question, and don’t add extra steps inside the transaction. If your system supports row-level locking instead of table-level locking, even better: make sure it’s turned on and write your transactions to take advantage of it.
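As a rough illustration (SQL Server syntax, with hypothetical table and column names), do the slow work before the transaction starts and keep only the actual write inside it:
-- Compute the slow aggregate BEFORE opening the transaction
DECLARE @NewTotal DECIMAL(18, 2);
SELECT @NewTotal = SUM(Amount) FROM Orders WHERE CustomerId = 42;
-- The transaction holds locks only for the single, quick write
BEGIN TRANSACTION
UPDATE CustomerTotals SET Total = @NewTotal WHERE CustomerId = 42
COMMIT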
SQL Server
Get familiar with WITH (NOLOCK). It is your friend. Understand it (it reads uncommitted data, so you can get dirty reads). Learn it. Love it. Always, always use it as a starting point. Write your queries like this:
SELECT * FROM TableName WITH (NOLOCK)
That’s the basic starting point. A better alternative is to turn on read committed snapshot isolation for the whole database:
ALTER DATABASE DatabaseName SET READ_COMMITTED_SNAPSHOT ON;
This tells the database to keep row versions in tempdb so that readers and writers don’t block each other, which prevents a lot of the locking in the first place. There are some downsides to it, so make sure you read the docs and understand it before you turn it on. But if you can, take advantage of it.
MySQL/MariaDB
I’m not as familiar with these systems, but plain SELECTs in InnoDB generally don’t lock rows at all, thanks to MVCC. Where locking does bite you is schema changes, and for those there’s the LOCK=NONE clause on ALTER TABLE, which asks for an online change that doesn’t block reads or writes. Check the docs. It goes something like this:
ALTER TABLE TableName ADD COLUMN ColumnName DATATYPE, LOCK=NONE;
PostgreSQL
You’re in luck. PostgreSQL generally takes the opposite approach: thanks to MVCC, readers don’t block writers and writers don’t block readers, so a plain SELECT doesn’t lock anything the way SQL Server often does by default. You tell it when to lock things, and it doesn’t lock when you don’t. There are exceptions, so make sure you understand what you’re doing, but enjoy the default behavior for what it is.
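A minimal sketch of what that looks like, with hypothetical table and column names:
-- Plain read: no row locks are taken
SELECT * FROM TableName WHERE IdColumn = 'ID Value';
-- Explicit lock: rows are locked only because you asked for it
BEGIN;
SELECT * FROM TableName WHERE IdColumn = 'ID Value' FOR UPDATE;
UPDATE TableName SET ColumnName = 'New Value' WHERE IdColumn = 'ID Value';
COMMIT;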
Use Transactions and Rollback
If you need to update data, get in the habit of testing that change first using a transaction with a rollback. Like this:
BEGIN TRANSACTION
UPDATE TableName SET ColumnName = 'New Value' WHERE IdColumn = 'ID Value'
ROLLBACK
I always write an UPDATE statement inside a transaction with ROLLBACK. Always. Well, almost always. But so, so many developers make their first big screwup by deleting an entire table of data because they forgot the WHERE clause, or wrote the clause wrong. It’s a rite of passage. I don’t think you can really consider yourself a senior developer until you’ve done this at least once. If you’re lucky, you only did it on a local or dev database. But a lot of us have done it in prod. (See note above about backups and testing those backups).
Get in the habit of surrounding an UPDATE or DELETE with a transaction and rollback. Run it. Look at the output. Does it match the number of rows you expected to change? Test it in a dev or local database first, BEFORE you try it in production. An even better pattern is to add SELECT statements before and after the UPDATE or DELETE:
BEGIN TRANSACTION
SELECT * FROM TableName WHERE IdColumn = 'ID Value'
UPDATE TableName SET ColumnName = 'New Value' WHERE IdColumn = 'ID Value'
SELECT * FROM TableName WHERE IdColumn = 'ID Value'
ROLLBACK
Do the two SELECT statements produce the output you expected? If so, then go ahead and change the ROLLBACK to COMMIT and run your code to make that change real. I can’t tell you how many times this habit has saved my skin. It’s a good habit, and probably one of the best I’ve ever put into my repertoire.
Avoid SQL Injection
Maybe you don’t sling SQL code, but you write Java or C# or JavaScript most of the time. Learn the ways to avoid SQL injection attacks. It’s one of the oldest security fail points, and yet it’s still on the OWASP Top 10 list every single year. You’d think we, as a developer community, would have gotten that one right by now. But no, as you can see, it was still in the 2025 list at #5. Come on, people! Figure it out!
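The fix is the same everywhere: never build a query by concatenating user input into the SQL string; pass the input as a typed parameter instead. Every mainstream data-access library (JDBC, ADO.NET, the various node drivers, ORMs) supports parameterized queries or prepared statements. Here’s a sketch of the idea in T-SQL itself, since that’s the dialect used above (table and column names are hypothetical):
-- A typed parameter holding whatever the user submitted
DECLARE @UserInput NVARCHAR(50) = N'value from the user';
-- DON'T build the query by concatenating @UserInput into the SQL string.
-- DO pass it as a parameter (sp_executesql here, or your library's prepared statements):
EXEC sp_executesql
    N'SELECT * FROM TableName WHERE ColumnName = @Value',
    N'@Value NVARCHAR(50)',
    @Value = @UserInput;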
Only Take What You Need
Avoid SELECT * whenever possible. In some systems that alone can hurt performance, and in general it’s a good habit to pull only the data you need. That doesn’t just apply to the communication between the web UI and the API, but also between the API/data layer and the database itself. The more data you pull, the more bandwidth you use, the longer it takes, and the less secure that interaction is. Do you really need all 27 columns? Or do you just need the data in 5 of them? Sure, for a couple of rows it’s not that big a deal. But when it’s thousands or tens of thousands of rows, thousands of times an hour, 24 hours a day, 7 days a week, that adds up really fast.
Add to that the potential security implications. Does that table store data that could be used in an unexpected way? If the data isn’t needed on that page, don’t include it. You never know what you might be opening yourself up to unexpectedly.
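A trivial sketch, with hypothetical column names, but it’s the habit that matters:
-- Instead of dragging every column (including sensitive ones) across the wire:
SELECT * FROM Customers WHERE Region = 'West'
-- Pull only what the page actually needs:
SELECT CustomerId, FirstName, LastName, Email FROM Customers WHERE Region = 'West'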
Data Masking
Don’t, don’t, don’t copy production to UAT or dev environments. At least, don’t do it without implementing data masking. Names, account names and numbers, email addresses, physical addresses, phone numbers, absolutely ANYTHING that could even remotely be considered PII should be replaced with test data. If data is copied to lower environments, there should be absolutely no way that any of it could be used to identify a real person or account. It doesn’t belong there. It doesn’t need to be there.
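At its simplest, masking can be a script you run right after the restore to the lower environment. A rough sketch, with hypothetical table and column names (real masking is usually done with dedicated tooling and covers far more than this):
UPDATE Customers
SET FirstName = 'Test',
    LastName  = CONCAT('User', CustomerId),
    Email     = CONCAT('user', CustomerId, '@example.test'),
    Phone     = '555-0100'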
“But how can it be a real test without ‘real’ data?” I hear you ask. It IS ‘real’ data. It’s just masked so that it isn’t real data. If your code is so badly written that you don’t believe it can be tested effectively without a real copy of production data, you don’t have any business being in software development. Go join the sales department.
Conclusion
Databases and the data in them are the lifeblood of your business. Your business can’t exist without them. That means it’s absolutely critical to develop good, safe habits when you have to work with databases or write the code that will. A little bit of effort up front goes a long way toward saving you a world of pain later on. So go forth and code! Safely!
