
Are Your Database Sizes Growing Out of Control?

This post comes with some very clear health warnings: do not try this at home – well, at least not all of it. The goal here is to help you understand whether you have a storage problem, not to encourage dangerous tinkering in production databases.

I’m sure most of you have a DBA in-house who quietly takes care of this stuff for you. If you don’t, feel free to get in touch – we might be able to help.


Does Database Growth Matter?

If you’re hosted by Civica, you might be tempted to shrug and say: “Who cares about storage?” After all, disk space is relatively cheap.

But here’s the rub: if your nightly Extract Transform Load (ETL) jobs are taking too long, you should care. A bloated database doesn’t just eat storage – it eats time, network bandwidth, and opportunity. ETL jobs are there to take your production systems and copy them over into a nice pot you can report from.


Most environments typically include:


  • CxLive

  • CxTest

  • CxTrain

  • CxWarehouse


So let’s say your Live environment is 300 GB:

  • The backup is 300 GB.

  • That’s restored to another server – another 300 GB.

  • Then restored nightly into CxWarehouse – another 300 GB.


That’s not just storage. It’s also traffic across your network and extended ETL processing time.


A nightly ETL should be completing in about an hour, not three. Once you creep into the 3–4 hour window, you lose the flexibility to do other things, like refreshing your Power BI semantic models before the business day starts.

And don’t forget: every one of those servers and databases is also backed up.


So the total storage impact of a single 300 GB Live database quickly multiplies.

Now, imagine if you could shrink that database by just 100 GB in Live – the savings cascade across every environment and every backup.
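
Before chasing savings, it helps to confirm what each database and its log are actually using. Here’s a quick sketch using the standard sys.master_files catalog view – file sizes are recorded in 8 KB pages, so dividing by 128 gives MB:

-- Rough per-database data and log sizes, largest first
SELECT
	DB_NAME(database_id)                      AS DatabaseName,
	type_desc                                 AS FileType,   -- ROWS (data) or LOG
	CAST(SUM(size) / 128.0 AS NUMERIC(18, 2)) AS SizeMB      -- size is in 8 KB pages
FROM sys.master_files
GROUP BY database_id, type_desc
ORDER BY SizeMB DESC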


Where to Start: Find the Big Offenders

Here’s a useful SQL query to list all tables and their sizes in descending order:

-- Lists every table with its row count and used / unused / total space in MB
-- (space is counted in 8 KB pages, hence the divide by 128)
SELECT *
FROM (
	SELECT
		s.Name AS SchemaName,
		t.Name AS TableName,
		p.rows AS RowCounts,
		CAST(ROUND((SUM(a.used_pages) / 128.00), 2) AS NUMERIC(36, 2)) AS Used_MB,
		CAST(ROUND((SUM(a.total_pages) - SUM(a.used_pages)) / 128.00, 2) AS NUMERIC(36, 2)) AS Unused_MB,
		CAST(ROUND((SUM(a.total_pages) / 128.00), 2) AS NUMERIC(36, 2)) AS Total_MB
	FROM sys.tables t
	INNER JOIN sys.indexes i ON t.OBJECT_ID = i.object_id
	INNER JOIN sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
	INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
	INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
	-- uncomment the WHERE clause to see tables that were likely created as ad-hoc backups
	--WHERE
	--	(CHARINDEX('_', t.name) > 0 OR CHARINDEX('backup', t.name) > 0)
	--	AND t.name NOT LIKE '%_user'
	--	AND t.name NOT LIKE '%_Audit'
	GROUP BY t.Name, s.Name, p.Rows
) x
ORDER BY x.Total_MB DESC

Run this and you’ll quickly see the usual suspects:


  • SecurityLoginAudit – logs who logged in, when, IP address, browser. I recently saw one taking up 40 GB stretching back to 2017 (see the sketch after this list for a quick way to check how far back yours goes).

  • CxLog – every error and info message ever logged. Depending on site settings, this can vary wildly.

  • Old backup tables – e.g. RentTransaction_Backup_JW_250910. Sometimes necessary to keep, but they add up and should be deleted over time – not least because Civica might create a table with the same name in a future release and your next upgrade will break!
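
Before deciding what to do with something like SecurityLoginAudit, it helps to see how far back it actually stretches. Here’s a rough sketch – the date column name (and the dbo schema) are assumptions, so check the real column list first, for example with sp_help:

-- How much of the login audit is ancient history?
-- CreatedDate is an assumed column name – verify it before running
SELECT
	YEAR(CreatedDate) AS AuditYear,
	COUNT(*)          AS LoginCount
FROM dbo.SecurityLoginAudit
GROUP BY YEAR(CreatedDate)
ORDER BY AuditYear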


Copying Live to Test: Easy Wins

I strongly advise sites to refresh Test from Live at least once a year – and definitely before any new project. But remember:

  • Test does not need to mirror Live exactly.

  • You should already be anonymising Test to avoid things like accidentally sending live SMS or emails (a rough sketch follows this list).

  • Beyond that, you can safely drop certain bulky tables that aren’t needed for UAT.
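
On the anonymisation point, the usual approach is to blank or redirect contact details as soon as the Test restore completes. A minimal sketch – the table and column names here are purely hypothetical placeholders, so map them onto your real schema and your own anonymisation policy first:

-- SKETCH ONLY, and only against a restored Test copy
-- ContactDetail, EmailAddress and MobileNumber are hypothetical names
USE CxTest;

UPDATE dbo.ContactDetail
SET	EmailAddress = 'test@example.com',	-- everything routes to a safe inbox
	MobileNumber = NULL;			-- no accidental live SMS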


Some real-world examples I’ve seen:


  • CommunicationPDFRendition – 120 GB of a 350 GB database. One client with just 9,000 assets had 270 GB in this table after a little over a year – they're on track for a terabyte database by year three! This table holds a PDF version of every comm ever sent. You obviously can't delete these, but maybe you need an EDM... fast!

  • CommunicationEDMXML – 80 GB. This one stores the metadata for the table above.

  • SecurityLoginAudit – 40 GB. Ever wondered what browser Brian from Accounts used when he logged in on 1st November 2018 at 9:15 am? No, me neither – but if you did, you'd find it in here!

  • SystemAttachment – 5 GB. Again, you obviously can't delete this, but should you be thinking about separating documents out of the transactional database?


So what do you do?


You definitely can't scurry into Management Studio and delete * from everything. Some tables simply need to grow. Others, like SecurityLoginAudit, have no right to be hogging all that space in the main database. If you do want to keep it, create a new database called Archive, move the rows out so you've got one copy, delete them from Cx, and enjoy quicker backups, shorter overnight ETLs, and no more of that look on the infrastructure guy's face when you ask him to up your data drive by another 50 GB for the second time this year.
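
The shape of that archive route looks something like the sketch below. The CreatedDate column is an assumption, the Archive database is assumed to already exist, and you'd want formal advice before running anything like this against a real Cx database:

-- One-off copy into the Archive database (SELECT INTO creates the table)
SELECT *
INTO   Archive.dbo.SecurityLoginAudit
FROM   Cx.dbo.SecurityLoginAudit
WHERE  CreatedDate < DATEADD(YEAR, -1, GETDATE());	-- keep the last year in Cx

-- Then trim the source in small batches so the transaction log doesn't balloon
WHILE 1 = 1
BEGIN
	DELETE TOP (50000)
	FROM  Cx.dbo.SecurityLoginAudit
	WHERE CreatedDate < DATEADD(YEAR, -1, GETDATE());

	IF @@ROWCOUNT = 0 BREAK;
END;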


If you think about Testing, however, you definitely don't need all that data in your CxTest database. I worked with one site whose Live database had grown to 180 GB; after we do a Live to Test copy, we strategically delete from about five tables to the point that CxTest and CxTrain are merely 10 GB each. Even the most thorough UAT testing can be carried out without lugging around a PDF rendition of every letter ever sent.
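
The trim itself is usually only a handful of statements, run against the freshly restored Test copy and never against Live. A sketch along these lines – the dbo schema is an assumption, and TRUNCATE will refuse to run on any table referenced by a foreign key, in which case fall back to DELETE:

-- SKETCH ONLY – run against the restored Test copy, never Live
USE CxTest;

TRUNCATE TABLE dbo.CommunicationPDFRendition;
TRUNCATE TABLE dbo.CommunicationEDMXML;
TRUNCATE TABLE dbo.SecurityLoginAudit;
TRUNCATE TABLE dbo.CxLog;

-- Reclaim the space afterwards if you need the files themselves smaller
DBCC SHRINKDATABASE (CxTest);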


A Very Clear Warning

Let me be absolutely clear: do not just start deleting rows from tables.


I learned this the hard way early in my career. Thankfully, I had a very understanding support team leader who helped me restore what I’d broken – but that kind of safety net isn’t always there.


If you’re considering any clean-up:

  • Talk to Civica.

  • Talk to me.

  • Talk to someone who knows the application and its database inside-out.

  • Raise a support call and get formal advice.


If you’re hosted, you probably can’t make these changes anyway. ETL run times and application performance are the main areas of concern, not disk space per se. If you’re on-prem or in your own cloud, you might have some leeway – but proceed with extreme caution.


Wrapping Up

Database growth is about more than disk space – it impacts performance, ETL windows, reporting, and backup strategies. By identifying the biggest tables, making sensible choices about what really needs to be copied into Test or Warehouse, and seeking expert advice before trimming, you can claw back significant efficiency gains.


And remember: just because you can delete something doesn’t mean you should.

