I'm having an issue with a SQL Server 2014 database: its transaction log keeps growing and the hard disk is almost full. Only 50GB of space is left on a 1TB drive, and the log (.ldf) file is consuming 800GB and continues to increase. We could shrink the log, but several blogs say that this affects server performance and risks data loss; alternatively, we could add disk space. Please advise.

Any help or suggestions much appreciated.
Although the question "Why Does the Transaction Log Keep Growing or Run Out of Space?" provides lots of information on transaction log management, it does not cover the main performance problem with the log: Virtual Log Files (VLFs), which I will address below.
Since your question is about managing size and performance of the transaction log, I'll write about each subject individually.
The transaction log records all transactions and the database modifications made by each transaction, and several maintenance actions have an effect on the space requirements of the log. When managing the size of the transaction log, you will need to schedule all maintenance activities with an understanding of how they will, or won't, affect growth. Maintenance activities include transaction log backups, updating statistics, rebuilding indexes, and other regularly scheduled jobs.

Your transaction log contains active and inactive virtual log files in a single physical file. Active VLFs contain transactions that have not been backed up; inactive VLFs are available for reuse and make up most of the free space in the log file. Generally, when pages in the database change, those changes are recorded in the transaction log. During an operation like rebuilding all indexes, pages change at a rapid pace and can prompt the database engine to autogrow the log file. Autogrowth events are recorded in the default trace.

I've experienced rapid growth due to index rebuilding, but I had the space to accommodate it so I didn't make any changes. You could increase the frequency of transaction log backups during index rebuilds, assuming you use the full recovery model, and distribute the index rebuilds over time so that the growth is mitigated.
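To see whether autogrowth events are what's inflating your log, you can read them back out of the default trace. This is a sketch; it assumes the default trace is still enabled (it is by default) and uses the documented event class IDs 92 and 93:

```sql
-- Locate the default trace file and read autogrowth events from it.
DECLARE @TracePath nvarchar(260);

SELECT @TracePath = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT  te.name            AS EventName,    -- e.g. "Log File Auto Grow"
        tg.DatabaseName,
        tg.FileName,
        tg.StartTime,
        tg.Duration / 1000 AS DurationMs    -- Duration is reported in microseconds
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS tg
JOIN sys.trace_events AS te
    ON tg.EventClass = te.trace_event_id
WHERE tg.EventClass IN (92, 93)             -- 92 = Data File Auto Grow, 93 = Log File Auto Grow
ORDER BY tg.StartTime DESC;
```

If the log autogrow events cluster around your index maintenance window, that tells you which job to reschedule or break up.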
I've experienced poor performance in log files to the point where some of our internally developed applications slowed noticeably and timed out. The cause was an excessive number of small virtual log files. The size and number of VLFs contained in one physical file is directly related to the autogrowth settings. To check whether you have excessive VLFs, run `DBCC LOGINFO`. If the resulting number of records is in the hundreds or thousands, then perform the following procedure:
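`DBCC LOGINFO` returns one row per VLF, so counting the rows gives you the VLF count. A sketch for capturing and summarizing that output, assuming the SQL Server 2012–2014 column layout (older versions lack the `RecoveryUnitId` column):

```sql
-- Capture DBCC LOGINFO output into a temp table and count VLFs.
CREATE TABLE #VLF
(
    RecoveryUnitId int,            -- present from SQL Server 2012 onward
    FileId         int,
    FileSize       bigint,
    StartOffset    bigint,
    FSeqNo         bigint,
    [Status]       int,            -- 2 = active (in use), 0 = inactive (reusable)
    Parity         int,
    CreateLSN      numeric(38, 0)
);

INSERT INTO #VLF
EXEC ('DBCC LOGINFO');

SELECT COUNT(*)                                        AS TotalVLFs,
       SUM(CASE WHEN [Status] = 2 THEN 1 ELSE 0 END)  AS ActiveVLFs
FROM #VLF;

DROP TABLE #VLF;
```

A total in the hundreds or thousands is the symptom described above; a healthy log for a database this size usually has well under a hundred.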
- Change the autogrowth settings to something reasonable. It's difficult to make a recommendation in your case because an 800GB log file is highly unusual for a 96GB database. The value needs to be large enough that the database engine creates only a small number of VLFs per growth. I would try 500MB to 1GB.
- Shrink the file as small as possible
- Reinitialize the file. Usually this step means growing the file back to its original working size. If you have no idea what size to reinitialize it to, try 50GB.
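The three steps above can be sketched as a single script. The database and logical file names here (`YourDb`, `YourDb_log`) and the backup path are placeholders; look up the real logical name in `sys.database_files`, and back up the log first so the inactive VLFs can actually be released:

```sql
USE YourDb;

-- 1. Back up the log so inactive VLFs become reusable (full recovery model).
BACKUP LOG YourDb TO DISK = N'X:\Backups\YourDb_log.trn';

-- 2. Shrink the log file as small as possible (target size in MB).
DBCC SHRINKFILE (YourDb_log, 1);

-- 3. Reinitialize: grow it back to a working size in one step,
--    which creates a small number of large VLFs, and fix the growth increment.
ALTER DATABASE YourDb
MODIFY FILE (NAME = YourDb_log, SIZE = 50GB, FILEGROWTH = 1GB);
```

If the shrink doesn't release much space on the first pass, the active portion of the log may be at the end of the file; take another log backup and shrink again.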
Allowing the file to grow back through many small autogrowths instead of reinitializing it in one step will let the VLF problem recur, and it also contributes to higher IO, because each autogrowth of the log file must be completely zero-initialized before it can be used (instant file initialization never applies to log files). This is why shrinking the log file without reinitializing it afterwards leads to poor performance.
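To confirm the new settings took effect, you can inspect each file's current size and growth increment. A small query against the standard catalog view:

```sql
-- Current size and autogrowth settings for every file in the current database.
SELECT name,
       type_desc,
       size * 8 / 1024 AS SizeMB,                  -- size is stored in 8 KB pages
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END             AS GrowthSetting
FROM sys.database_files;
```

Percent-based growth is worth fixing while you're here: on an 800GB file, a 10% growth means an 80GB zero-initialized grow event.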