
Podcast Season 2 Ep. 2 VLFs

  • Host: Steve Stedman / Mitchell Glasscock
  • Recording Date: 1/8/2025
  • Topic: VLFs

Stedman SQL Podcast Sn 2 Ep. 2 VLF

Steve Stedman and Mitchell Glasscock discuss the importance of Virtual Log Files (VLFs) in SQL Server, emphasizing their impact on performance and backup processes. They explain that VLFs are sequential chunks within the transaction log file, and that too many VLFs, or VLFs that are too large, can slow down systems, especially during backups and restores. They recommend keeping VLF counts under 500-1000 and individual VLF sizes under 1-2 GB. They demonstrate how Database Health Monitor (DHM) helps manage VLFs, including visualizing VLFs, shrinking log files, and setting proper growth settings. They also stress the importance of monitoring and alerting systems to prevent issues like running out of disk space due to unmanaged log files.

Podcast Transcript

Steve Stedman  00:15 Welcome to the Stedman SQL Server podcast. This is season two, episode two, and I am your host, Steve Stedman. As we go into the second year of doing this, we've got a lot of exciting things planned. One thing I want to point out is that for the month of January, we're offering a deal on our managed services product, where we take care of your SQL Servers so you don't have to. With that deal in January, anyone who signs up new gets 12 months for the price of 10 for the first year, which is basically a 16% discount for year one.

Also, do you want to be a guest on our podcast? Do you have some SQL Server topic that you want to share with our listeners? Please reach out to my assistant, Shannon, at StedmanSolutions.com to schedule, or you can go to StedmanSolutions.com, click on the podcast, and there's a sign-up form there to sign up as a guest.

So I'd like to welcome everyone to this week's podcast. This week's topic: what are virtual log files, and why do they matter to SQL Server? And I'm joined by Mitchell Glasscock. He's one of the Database Health Monitor developers, he's been with Stedman Solutions for under a year now, and he's been helping add quite a few things into Database Health Monitor in the last few months. Welcome, Mitch. I'd like to also mention that, as he is one of the Database Health Monitor developers, Database Health Monitor is a key tool that we have for a lot of database management things. We've got some new features in the last several months around VLF visualization and management, and you can download it at DatabaseHealth.com. So at this point, we'll jump into what are VLFs, and why should you care? We've got some slides here from a presentation I did a few years ago that I was able to pull some of the content from and reformat a little bit.

But to start with, in SQL Server, most databases have a minimum of two files. You can have more, but generally you have a minimum of two: one of them is your data file, and one of them is your log file. And as shown here, this is just some default settings after creating a new database. The log file is what keeps track of transactions while they're running, and depending on whether you're in full recovery model or simple recovery model, the transaction log behaves differently. In simple recovery model, what's written to the transaction log is kept around for the duration of a transaction. Once the transaction completes or is rolled back, what's in that log can be cleared out and used again. When you have full recovery model, that information stays in that log until the log is backed up. There are a few other things that can impact it, like replication and log shipping, things like that. But until the log is backed up, your log file can continue to grow. And what we're going to look at is how, inside of that log file, there are smaller pieces that are called virtual log files, and these are sequential pieces. Basically, when SQL Server allocates new log file space, the space is allocated in chunks that are called VLFs. Now, depending on the version of SQL Server that you're using, those virtual log files are added in different numbers of chunks. Prior to SQL Server 2014, if a growth is up to 64 megabytes, you get four virtual log files; 64 megabytes to one gig, you get eight; and more than one gig, you get 16. They've adjusted this since SQL Server 2014, and I think it's much smaller numbers now. Gosh, I should have checked the numbers on the latest version.
But I think generally you're getting between one and four depending on the growth size, on databases newer than SQL Server 2014. Now, why does this matter? So this is just a big file. If you end up with too many VLFs, or a high count of VLFs, that can really slow down your system. Mitch, can you think of some examples of how you might end up with too many VLFs?
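As an aside, if you want to check the VLF layout of your own database while following along, a quick sketch like this works (it assumes SQL Server 2016 SP2 or later for `sys.dm_db_log_info`; on older versions, `DBCC LOGINFO` returns similar per-VLF rows):

```sql
-- Summarize the VLF layout of the current database.
-- Requires SQL Server 2016 SP2 or later; use DBCC LOGINFO on older versions.
SELECT COUNT(*)                                        AS vlf_count,
       MAX(vlf_size_mb)                                AS largest_vlf_mb,
       SUM(CASE WHEN vlf_active = 1 THEN 1 ELSE 0 END) AS vlfs_in_use
FROM sys.dm_db_log_info(DEFAULT);
```

The `vlf_active` column marks the in-use VLFs that the shrink discussion later in the episode refers to.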

Mitchell Glasscock  04:52 You could be making a lot of changes, and it could just bloat your log file. You're doing a lot of changes all at once, and your transaction log won't have time to roll over, and you'll just keep sequentially adding them.

Steve Stedman  05:10 So like a really giant transaction, yep, that's definitely one. Another one that we see is where something happens, like, for some reason, people turn off their log backup job for a while, and they turn it off for a couple of days. And during that time, the log file grows and grows and grows to a point that's way bigger than you'd ever need if you're doing proper backups. So yeah, a couple of different scenarios there as to how you might end up with high VLFs. But the problem is not the size of the log; it's the number of VLFs and the size of the VLFs themselves. So generally, what we see is that with greater than about 500 to 1,000 VLFs, you can have serious performance issues. When we're looking at things, we generally look at anything more than 250, and we want to figure out how to get it a little bit under control. But also, if your VLFs are too big, that can cause problems too, so greater than about a gig in size for an individual VLF can be a problem, depending on the overall performance of your system. So the way the log file works is, as the log file expands, it just keeps putting these virtual log files on the end of it. And we're going to have a demo with Mitch in just a minute to talk about that. But as the file grows, it just puts these chunks of varying sizes on the end of the file, and unless you do something to manually clean them up, the file just gets allocated and doesn't get cleaned up.
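A rough sketch of that kind of check across an instance, flagging databases over the ~250 VLF watermark mentioned above (again assuming SQL Server 2016 SP2 or later for `sys.dm_db_log_info`):

```sql
-- Flag databases whose VLF count exceeds the ~250 watermark discussed above.
SELECT d.name, li.vlf_count
FROM sys.databases AS d
CROSS APPLY (SELECT COUNT(*) AS vlf_count
             FROM sys.dm_db_log_info(d.database_id)) AS li
WHERE li.vlf_count > 250
ORDER BY li.vlf_count DESC;
```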

So let's look at an example here. Let's say you just create a database and go with the defaults of one megabyte for the log and 10% growth, and let's say the log needs to grow to three and a half gigs. What's going to happen is that, initially, with that 10% growth, you're going to get a lot of really tiny virtual log files, and then eventually, to get to three and a half gigs, it'll grow out to about 420 virtual log files, and you get these really varying sizes of virtual log files. Now, the problem with percent growth is that as the files get bigger and bigger, that percent growth gets bigger and bigger. It's a relative setting, and you end up in a position where you've got some really mismatched sizes, where one transaction might span 30 VLFs and another one is fully contained in one VLF. So we usually want to clean that up a little bit, and we'll talk about that in a minute. But let's look at another example. Let's say you start big: you start with your log file at two gigs, 2,000 megabytes, and you grow at 10% growth. Still the same 10% growth setting, but you started with a much larger file to begin with. Now, if the log file grows to that same three and a half gigs, the VLF count will be about 64, and the VLF sizes will vary a little bit, but they'll be up to 225 megabytes in size. So that's a little bit more manageable of an example. And then let's take a look at another scenario: instead of 10% growth, we're looking at 500 megabyte growth, so half a gig per log file growth. The VLF count would be about 40, which is a little bit less, and the VLF sizes would be around 125 to 250 megabytes, which is a pretty good spot for the sizing there. Now, Mitch, do you want to talk a little bit about what happens if you end up growing your log file too big?
What can we do to deal with it if it ends up with, like, a really bad set of settings here? That's going to depend on whether the transaction log has rolled over recently, and we haven't gotten into that yet.

Mitchell Glasscock  08:56 But as you stated earlier, the log file creates these VLFs sequentially, and if the last VLF is still in use, we kind of just have to wait until that transaction log rolls over and frees that up. If your last few VLFs are just absolutely massive, that can cause a lot of problems waiting for those to actually roll over. So one of the big things that we recommend is getting the settings set up properly. And I wanted to see if you could clarify: what's the difference between that 10% growth and the 500 megabyte growth?

Steve Stedman  09:42 Oh, yeah, let's go back and take a look at that. So 10% growth versus 500 megabyte growth. Well, it's not that different here, because with 10% you grow to 64 VLFs, and with 500 megabytes, you grow to 40 VLFs, right? And that's not that big of a difference there, really, overall. But the difference is that as the file gets bigger and bigger, that 10% gets bigger and bigger, and as each growth happens, you might have multiple VLFs getting created as part of that. So by setting it to a fixed size, like 500 megabytes versus 10%, you know it's always going to be growing at a reasonable size. Now, one of the problems we run into is, gosh, we had one that had really small growth settings, and they had over 100,000 VLFs inside of their log file. And the problem we ran into was on backup and restore. That was a database that took us almost an hour to back up just because of the size, and then when we restored it, it took over eight hours to restore the database, and like seven of those hours were spent just recreating all those VLF chunks in that log file. And what we found was that with that same database, we shrunk the log and expanded it and got it back to the same size, but with fewer VLFs, so bigger chunks and fewer VLFs, and it took the restore time on that database from over eight hours to just barely over an hour. So massive time savings on restore. And you might say, well, how big of a deal is it if your restore takes a little bit longer? Well, if something catastrophic has happened on your system and your system is completely down, you've got your manager and the president of the company and everyone else looking over your shoulder and screaming as to why the site's not working, and you're just sitting there twiddling your thumbs saying, well, we're waiting eight hours for this to restore.
And you know, you probably don't know it's VLFs, but if you knew, you'd say it's because the log file was set up poorly. That's a pretty bad spot to be in. And having had to do emergency restores for clients, we like to do them as quickly as we can and not have that kind of an eight-hour lag on the restore. We're going to talk a little bit about how Database Health Monitor gives you some cool things to deal with the log files. But first, the old way. Let's say you hadn't heard of Database Health Monitor, or you don't have access to it. Well, you can go check the VLF count with DBCC LOGINFO, which is an internal command that will tell you how many virtual log files there are, how big they are, and all that, and it just gives you a table display. Then you can go adjust the growth settings on your database file, or the settings in SQL Server for that database. Then you shrink it as small as possible, and do this off hours. But you also need to make sure that the last VLF, like Mitch mentioned, is not in use, or it won't shrink past that. Then you shrink it as small as possible, grow it back to the size you need, which might be close to the same size it was originally, check the auto-growth settings, and make sure that your VLF count is coming out lower with DBCC LOGINFO. After doing this, if you don't adjust your auto-growth settings, you may end up with just as bad of a scenario, depending on how you grow the file. But this is where we say there's a better way. The better way used to be a script that I had on my website at SteveStedman.com, where you could download and run this T-SQL to sort of visualize, in a character-oriented chart, what the log file looked like. But after using that for a few years, I realized, hey, we've got Mitch working on Database Health Monitor; maybe he could put it in as a cool new report.
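The manual workflow described above might be sketched roughly like this (the database and logical file names and the target sizes are hypothetical; run it off hours, and remember the shrink stops at the last in-use VLF):

```sql
USE [YourDatabase];

DBCC LOGINFO;                          -- check the VLF count before

-- Shrink the log as small as possible (target size in MB).
DBCC SHRINKFILE (N'YourDatabase_log', 1);

-- Grow it back to roughly its original size, with a fixed growth increment.
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 2000MB, FILEGROWTH = 500MB);

DBCC LOGINFO;                          -- verify the VLF count came down
```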
So let's take a look at Database Health Monitor. Go ahead and explain the report and what we're seeing here.

Mitchell Glasscock  13:46 All right, so as Steve said, we've been using just a script to visualize this, but we've now put it in a report. What this does is it creates a visualization of all the VLFs in the sequential order that they are in, and it shows you which ones are in use and which ones aren't currently in use. So as we see here, this isn't a great database. The settings aren't set up properly; it's set up for 10% growth. The database is pretty small and doesn't have a lot of transactions on it, so we won't see a lot of use on this, but it's still great for showing what this report can do.

Steve Stedman  14:25 Oh, go ahead. This database is not a great setup, but a really good example of a bad setup that is pretty common in what we see, right?

Mitchell Glasscock  14:34 So that 10% growth, we usually want to avoid using that setting. That's generally what we want to avoid, and this is how I currently have this one set up. Yeah.

Steve Stedman  14:47 And the general tip that we have, and we do this whenever we're doing managed services or a performance assessment or any of that kind of stuff, and this is kind of a best practice as well, is to never use percent growth on any of your database files. Always use some fixed size that's going to be appropriate for your database when it starts out. And it might be that if your database starts out at, like, 10 megabytes, and then next month it's at two gigabytes or two terabytes, you might have to adjust those settings. But still, we want to do a fixed number rather than a percentage growth there, right?
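Switching a log file from percent growth to a fixed growth increment is a one-line change. A sketch, with a hypothetical database and logical file name; you can look the logical name up first in `sys.database_files`:

```sql
-- Find the logical name of the log file for the current database.
SELECT name FROM sys.database_files WHERE type_desc = 'LOG';

-- Replace 10% growth with a fixed 500 MB growth increment.
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 500MB);
```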

Mitchell Glasscock  15:17 And if you had it set to a fixed number, and not a percentage growth, even if you had a lot of transactions going on, it's still going to grow by a fixed amount, and not like this exponential percentage growth that you might see if you get a lot of transactions going on.

Steve Stedman  15:37 Absolutely. And so what is the meaning here of the blue versus the green on the bars?

Mitchell Glasscock  15:44 So as we can see, one through 18, sequentially, are all in use. This was rolled over recently, so the first 18 VLFs are currently in use, and then 19 all the way down to 50 are currently not in use, and any transaction that might happen before the next backup is going to start filling in all these VLFs.

Steve Stedman  16:16 Yeah, I did mention that you can't shrink past the in-use ones, because it's a sequential file. If you had a whole bunch of them that were not in use, followed by one that was in use, that one that was in use would block you from being able to shrink anything smaller than that, right? And then you have to wait until the next rollover, until the next log backup, or until the transaction completes, depending on whether you're in simple or full recovery model, things like that. So then, how big overall is this entire log file that we're looking at here?

Mitchell Glasscock  16:45 So we're looking at 2,211 megabytes. Not a very big one on the back end, but we see here that it accounts for all the ones that are currently not in use, and the last in-use one, and it shows us that we can shrink this by 900 megabytes right now, just by clicking the Shrink File button.

Steve Stedman  17:09 Okay, so let's assume that instead of two gigs here on our full file and 32 VLFs, this was like 20 gigs and 3,000 VLFs. You'd be able to look and see all the green ones at the end that are not in use. So show us how hard, or how easy, it is to shrink those.

Mitchell Glasscock  17:32 If we want to shrink them, it's a "pretty complicated" operation: you just simply hover over and click Shrink File, and it's done.

Steve Stedman  17:38 Okay. Now, this is a small database, and actual mileage may vary on how fast this runs. If you're on a system with a really big file and slow I/O, that may take a little bit longer, but the time to click the button isn't any longer; it's the time it takes to actually shrink. So now that we've shrunk it, what is our current size?

Mitchell Glasscock  18:01 we’re down to 903 megabytes.

Steve Stedman  18:03 Okay, now, we know it was about 2,200 megabytes. The reason it was at 2,200 megabytes is probably because there were enough transactions going on that at some point the file had to grow to that size. So if we just shrink this, and we've got something that happens every day or once a week that's going to grow it back out to that 2,200 megabyte size, well, that's going to happen, and it might mess up our VLFs again if we don't have the proper growth settings. But can we just set it back right now to 2,200?

Mitchell Glasscock  18:40 Yeah, so we can easily expand this all the way back out to 2,200, and this is where you would want to go in and readjust your growth settings to what you need for your database, so that you're not getting way too many VLFs over the next few transactions, yeah.

Steve Stedman  19:01 and that’s the key here. Is that if we’re doing the manual expand here, we’re expanding it. It’s not related at all to the auto growth settings. But if we were not doing the manual expand here, and we were letting it just grow automatically, that’s where those growth settings come into play. All right, so right now we’ve gone for somewhere around 30 vlfs, down to around 16 vlfs. Now it’s possible, in the made up scenario, if we had like 3000 vlfs To start with, that after doing the shrink and the grow, we could have still ended up with less than 50 vlfs instead of 3000 if it was a much larger file we were dealing with, right,

Mitchell Glasscock  19:39 And even doing this in the Database Health Monitor application, you still want to do it outside of normal operating hours, so that you're not running into those issues where you're stomping on I/O and all of that.

Steve Stedman  19:53 Now with this, let's say you had 3,000 virtual log files showing up here, and the very last one was in use, and you hit shrink, and it couldn't shrink anything off of it. What are you going to do at that point?

Mitchell Glasscock  20:06 At that point, you just need to wait. You can either manually force the transaction log to roll over, or just wait until the next backup.

Steve Stedman  20:15 And depending on the activity on that database, when you say wait until the next transaction log backup, well, it would be waiting until the current log in use gets used up and then backed up, right? Sometimes I tell people, well, this is a really slow database, just let it sit and come back tomorrow and give it another try. Or, one of the things that I'll do if I really want to shrink it right now and that last one's in use is I'll go kick off, assuming it's maintenance time where we can put some more load on it, something that's going to cause a bunch of log writes. For instance, index and statistics rebuilding on a database. It's a good practice, and it's a great way to churn through some log files if you have big enough data there. As you do the index rebuilding and statistics updates and stuff like that, all of that gets written to the log, and that's a good way to artificially pump some stuff into the log to chew up the current log file that's in use. Then run a log file backup, and then it should roll over and be back to the beginning of the log file, and you can shrink at that point, right? It's one of those things where, you know, if your VLF count is, like, in the few hundred, or your log file is... well, overall log file size can be kind of a relative thing, but let's say you've got a 300 gig database file, and your log file has grown to 500 gigs. Well, on bigger databases, that's a whole lot of log file space, and that's an indication something wasn't working right. So the other thing we can look at related to this is the historic file size over time report. I was just thinking of that as we talked through this; that would be under your database name, under size over time. So let's say we shrink the log file and get the VLF count good, and we can come in here and see that.
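That "chew up the log, back it up, then shrink" sequence might look roughly like this (all database, table, file, and path names are hypothetical; do this during a maintenance window):

```sql
USE [YourDatabase];

-- Generate a bunch of log writes to churn through the in-use VLFs.
ALTER INDEX ALL ON dbo.SomeBigTable REBUILD;

-- Back up the log so the in-use VLFs can be marked free.
BACKUP LOG [YourDatabase] TO DISK = N'D:\Backups\YourDatabase_log.trn';

-- Now the shrink should be able to get past what was the last in-use VLF.
DBCC SHRINKFILE (N'YourDatabase_log', 1);
```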
Well, the log file is now at this point, and this only updates, I think, a couple of times an hour, so it will take a minute to see the update here. But it might be that two weeks later, you go back and look, and it has grown back to some really big size that you weren't expecting. Well, you can use this chart to figure out when your log file grew, and at what specific time it was growing. And you may be able to go and look at, like, what jobs are scheduled at that point. Or is that happening after hours? Or is that happening in the middle of the day when you have the most load on the system, or whatever it may be? You can use the time of when the file actually grows to correlate it with some event that caused it to grow, and to figure out, well, gee, maybe you've got some query running. And we see this a lot with, like, data migrations, or people trying to pump data into a reporting database, where they bring in a big table and they dump the whole thing and replace it every single day. That's one of those things where doing too much inside of a single transaction can cause the file to really blow up. Now, it might be necessary; you might need it to be that big. But there might be other ways to do it if you're constrained on space.
Now, let's go back to the VLF report here. And I'm going to say, I wish you'd had this done about three weeks before it was actually done, because in that final three weeks that we were testing this, there were, gosh, I think, three different clients that I worked with where they had some event happen that bloated out a lot of their log files and chewed up all their drive space, causing crashing because they had no more disk space on the drive. So we had to manually go in and shrink a number of their log files. What had happened is their log backups had failed and been turned off for a few days, and when they turned them back on, they then had all these over-bloated logs, and we had to go in and clean them up manually the old way, using shrink file. Something like this would be really, really handy to be able to go find those. Now, another thing with this: if you click on your server name up top and then go to the instance reports, click on the disk utilization report.

Mitchell Glasscock  24:26 See, don’t have that one pulled up or sorry, file utilization.

Steve Stedman  24:28 It’s this in the bottom there. File utilization. Okay, now if we sort this by free space and megabytes, and sort of the other way. So we get the big ones at top. One of the things I’ll do is I’ll go look and see, do we have log files in here, if we’re concerned about disk space on our log drive, for instance, do we have log files in here that have a ton of free space in it? And if so, then I want to look into why we have that. Now, this is your desktop. It’s. A test server is not very indicative of a real server, right, right? But sometimes you’ll go and look like I was working with one client who found like, 400 gigs in one of their log files because they had run a really big transaction and or update with the transaction and left the transaction open for too long, so, and it just really bloated out their files. So that was one I was able to go and see we could clean up hundreds of gigabytes by looking at which file it was then going to, the VLF report and shrinking it. Okay, so anything else we want to show and did the self monitor,

Mitchell Glasscock  25:34 You mentioned that we had some clients that had their VLFs, or their log files, grow way too big, and it was causing crashing. One of the things is, just because we shrink the log file doesn't mean that we've solved the problem, though, right?

Steve Stedman  25:54 Well, right. So the real question is, why did those files grow? With the one client that actually was running out of disk space, and it was crashing their SQL Server because they had no more free space on that disk, it was because they had weekly full backups, and they had log file backups in between, and their log file backup job had been disabled, accidentally or temporarily, and somebody forgot to turn it back on. It went for, like, a couple of weeks without log file backups, and the logs just continued to grow and grow. They didn't have any monitoring, like Database Health Monitor, that would alert them on big log files or no recent backups and those kinds of things. And it grew and grew and grew to the point that they ran out of disk space. That was the problem, or the cause of it: the backups weren't running. Now, the thing that made it worse was that, yeah, the backups weren't running, but nobody knew about it, because they didn't have any alerting. The only alerting they got was from the people using the system when their transactions were failing because the drive was out of space, right?

Mitchell Glasscock  27:03 So in Database Health Monitor, we have redundancies that do that alerting. So even if you do the shrinking and you think that you've handled the problem, but maybe you missed, like you said, the log file backup, we have that in Database Health Monitor to alert on, yeah.

Steve Stedman  27:24 So there’s, there’s two places that we generally cover that one is in the quick scan report database health monitor, where we can go and see a bunch of the really common alerts and common things that pop up, common warnings that we’ve caught over the years. And then the second way we get that is with our managed services or our daily monitoring product, we have it so it takes a lot of those things that are in the quick scan report and reports on them to a central server that just says, here’s the issue, and then from there, we have email alerting that either goes to our team or to the client’s team or both, let them know that there’s something here that needs attention. And for those things, like you’re running out of disk space and there’s been no backups recently. Those are what we call urgent issues, where it sends an urgent issue alert to our team and someone on our team, well, then it keeps sending it every 20 minutes until someone on our team deals with it. But it’s one of those things. When we see those urgent issues emails, we need to generally drop whatever we’re doing and go focus on getting those fixed as quickly as possible. And when those go to a team of three or four of us, and they go to three or four people at the client site, then generally, it’s something that’s going to get taken care of really quickly, just because people don’t want to get keep getting the email where low disk space or no backups is something that can go totally unnoticed for weeks, and it has without that kind of alerting. Okay. Well, do, is there anything else you want to show in the report here?

Mitchell Glasscock  28:47 Or should we jump back to the presentation that covers it for the VLF report?

Steve Stedman  28:51 Okay, so let’s so just to kind of summarize a few things here on this. Well, I guess before we go into the summary, do you have anything else you want to add, or any topics you want to talk about here, or any questions.

Mitchell Glasscock  29:03 I don’t think so. I think we went into a pretty good depth on vlfs and what they are, and a good generalization of the whole topic there.

Steve Stedman  29:11 So then the thing just kind of summarizes what we’ve covered here, is that the vlfs, or virtual log files, that are all parts of the transaction log file. They’re basically sequential chunks that get laid out in that file that can then be used to store your transactions. They as they’re being used. And that varies depending on whether you’re in full or simple recovery model. In full, they stay there until the transaction logs backed up. And in simple recovery model, they stay there until the transaction completes too high of a VLF count can negatively impact performance, and that’s specifically around Backup and Restore, but we have seen that it can do transaction performance, not to the extent that we saw in the backup, but that we generally recommend reducing these in order to help with performance. Overall, too big of a VL. That can negatively impact performance. So let’s say you’re going to do four gigs of log file, and that got added as one PLF. Well, that would be kind of bad, because it’s going to go beyond that, like one gig threshold. So generally, what we like to do is when we added, if we’re adding more than a couple of gigs at a time. Well, if we’re adding more than a couple of gigs, we usually add it in smaller chunks to keep it from going over a gigabyte at a time. And that’s where we get sort of this Goldilocks theory, where we just need it just right as large as possible, but keeping the VLF count under a few 100, and keeping the total size of the VLF under around a gigabyte to and on newer versions of SQL Server, two gigabytes might be reasonable as well. So just trying to keep that and then adjust as necessary as your system grows. And that’s one of those things where, gosh, I’ve seen it just in the time we’ve had the VLF report out in the last couple months, where you look at it and then something changes. 
The database has had a lot of growth or something, and we need to go adjust it and look at it again, just because we've had that growth.
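Growing a large log in steps, so that no single growth produces VLFs over that roughly one-gigabyte threshold, can be done by issuing successive MODIFY FILE sizes rather than one big allocation. A sketch with hypothetical names and sizes:

```sql
-- Grow the log to 8 GB in two 4 GB steps rather than one 8 GB allocation,
-- so each growth is split into smaller VLFs.
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 4096MB);

ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 8192MB);
```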

Mitchell Glasscock  31:08 Right? So one last question I guess I have for this is why we say that we want that to keep the VLF account low, and we’ve noticed anything from 500 to 1000 plus vlfs can really negatively impact that performance. But why is that? Why is it between 500 to 1000 plus?

Steve Stedman  31:30 Well, it’s just kind of a threshold that would found over time. I mean, I’ve never seen anyone have any issues when they’ve got less than 250 vlfs. I have started to see that it starts to add time to the restore with the more you have. And we’ve seen that things like 100,000 vlfs, absolutely can cause massive problems on like we talked about on that one eight hour database restore. But where that threshold is, it’s a little bit loose, depending on your system. I mean, if you’re on a system that has massively fast IO you might not notice it up until around 1000 vlfs. But if you’re on a little bit slower system, you might be seeing it around two or 300 vlfs Being an issue. So we’ve kind of set the threshold there just to be safe as somewhere around 250 but if someone’s got needs big log files, they’ve got 300 vlfs. I’m not going to panic over that, but if they’ve got, if they need big log files, they’ve got 10,000 vlfs. Well, that’s worth spending some time to focus on it and get cleaned up and get those virtual log files down to a more manageable level there. Right size of the log is not the issue. It’s how many of those smaller chunks inside need to be managed. All right. Well, I guess then at this point, that wraps it up. Thanks for watching this week’s Stedman’s SQL Server podcast, and next week, episode three of season two. We’re going to have another guest next week, which is one of our partners, and we’re going to be talking about some things around cyber security and what happens if you don’t do it well. And thanks for tuning in. And if you want to be a guest, you can always reach out to us, like I mentioned before, and invite you can visit us at Stedman solutions and click on the podcast, or you can go to my YouTube channel and find the podcast there. It’s available in a number of locations. So thanks for watching. Thanks for joining me this week. Mitch, thank you. Have a great day. You Steve, thanks for watching our video. 
I'm Steve, and I hope you've enjoyed this. Please click the thumbs up if you liked it. And if you want more information and more videos like this, click the subscribe button and hit the bell icon so that you can get notified of future videos that we create.

Getting Help from Steve and the Stedman Solutions Team
We are ready to help. Steve and the team at Stedman Solutions are here to help with your SQL Server needs. Get help today by contacting Stedman Solutions through the free 30-minute consultation form.

Contact Info for Stedman Solutions, LLC. --- PO Box 3175, Ferndale WA 98248, Phone: (360)610-7833
Our Privacy Policy