
Making DeNoise AI Faster

In Blog, Photo Taco Podcast by Jeff Harmon

Right up front I want to set some expectations on the speed of DeNoise AI. Even on the best computer hardware here in early 2022, DeNoise AI is not “fast”. The fastest I was able to get raw images processed in DeNoise AI was about 6 seconds per image. That is 6 real seconds, not the time DeNoise AI reports as it processes.

6 seconds per image is fine when a photographer is running an image or two through the software, but it is not practical for running thousands of images through DeNoise AI. The story gets even worse on an older computer or one without a GPU, where times can climb to 25 seconds or more per image.

DeNoise AI runs fastest with a discrete graphics card (GPU), where the number of GPU cores matters and the RAM on the GPU doesn't. CPU also matters, but not as much as the GPU. Computer memory (RAM) doesn't matter above 32GB on a PC. M1 Mac computers run DeNoise AI as well as PCs with big GPUs and only need 16GB of unified memory. There is little difference between the M1 Max and M1 Pro for running DeNoise AI.

What is DeNoise AI?

If you already know what DeNoise AI is you can skip this section. DeNoise AI is an application built by Topaz Labs that runs on both PC and Mac. DeNoise AI specializes in applying noise reduction to photos. Specifically, it is software that uses AI models to analyze your photos and automatically figure out how to best reduce the noise while giving up the least amount of sharpness.

Sounds great, right? Most photographers run into shooting scenarios where there just isn't enough light and we end up with noise in our images. At the same time, I hear from a lot of photographers who really struggle with knowing how to use the noise reduction and sharpening tools available in both Lightroom and Photoshop, so the automatic part may be the most appealing thing about the software.

Is DeNoise AI really able to do automatic noise reduction, or is it a sales pitch that comes up short on execution? I don't want to spend a lot of time on that here because I have an article called Topaz DeNoise AI vs Lightroom and Photoshop where I share details of my experience with the application. I want to focus here on how to get the best performance out of DeNoise AI, but my short answer is yes, it does a very good job of automatically dealing with noise in photos without losing sharpness.

How Can Photographers Make DeNoise AI Run Fast?

As photographers read about DeNoise AI, trying to decide if they want to invest in the software, the question I get most from that DeNoise AI vs Lightroom article is whether it will work on their computer. Photographers read something about a graphics card or GPU being needed by the software, and they don't know if their computer has what is needed for DeNoise AI to work for them.

The good news is that Topaz Labs offers a trial version of the software so that any photographer can download it and give it a really good try before deciding.  That’s great, but if a photographer is sold on the software being valuable to them in their workflow, they still want to know what computer hardware will make the software run the best.

To answer that question, I ran more than 20 hours of real world tests.

Real World Testing Setup

As is always the case with Photo Taco, running a handful of photos through the software to draw a conclusion isn’t good enough for me.  I wanted to shape my testing around the real world use case of how photographers are most likely to use this software, so let’s run through my test setup.

Raw Images

Topaz Labs recommends feeding the raw file through DeNoise AI to get the best results: the raw file that came right off your memory card, not a DNG or TIF conversion of it, even though data isn't lost using those file formats.

Here is specifically what Topaz Labs said about using raw files in their The future of noise reduction is RAW blog post:

“Our RAW model uses all of that rich sensor data to provide results that are cleaner than anything else out there, even our existing models! We also made serious improvements to our DNG output support, so you’ll still be able to edit your saved image files with the same precision as your RAW files.”

I knew my testing needed to use original raw files, but I found early on that I got better results across different types of raw files using the “Severe Noise” AI model instead of the “RAW” AI model. All of the testing was done with original raw files and the Severe Noise AI model.

300 Images Per Test

Obviously, DeNoise AI is of most value to photographers who face low light shooting situations more often than not. To me that means wedding/event and sports photography is the use case I should test the software against. Topaz Labs says the software is most effective at applying noise reduction when you feed it the raw file, which means doing this before Lightroom and Photoshop. That also means 1,000 to 3,000 images from a shoot, so I thought I would use 2,000 images per test.

However, it became obvious pretty quickly that 2,000 images per test was not going to work.  When putting 2,000 images through the software on a couple of machines I own, the math said I would have to spend several hundred hours in testing to get the results I needed. I would be willing to do that if I thought that was something photographers would actually do, but there is no way any photographer is going to run 2,000 images through DeNoise AI before they start working on editing them. No matter what computer a photographer has, it just takes too much time.

I decided that I needed to scope the number of images in each test down to 300. That is about the average number of images I cull down to after an event or sports shoot, and more importantly, early testing showed 300 images took an amount of time photographers might actually be willing to wait to get cleaner images.

Reboot Before, Shutdown Everything Else

Photographers are very likely to be doing other things on their computer while DeNoise AI is processing their images, but if other software is running or the computer is being used for other things during a test, it would be next to impossible to get meaningful metrics about how the computer is being used by DeNoise AI.

The best way to make sure I have metrics about how DeNoise AI is using the hardware in the computer is to reboot prior to each test and then close down anything that may be running on the computer.  I did this with every test I ran and gave specific instructions to a few others who helped me do the same testing on the computers they have.

Gather Metrics Every 5 Seconds

Having done a lot of software testing in the past, I knew that having only the time it took DeNoise AI to process the 300 images wasn't going to be enough information to draw conclusions. I needed to gather metrics every few seconds to see how DeNoise AI uses the hardware of the computer: how CPU, GPU, RAM, and storage were all being used.

One of the challenges there is getting the same metrics across PC and Mac. For Mac I used iStat Menus 6 from Bjango. It is a very lightweight utility that puts metrics in your menu bar on the Mac, but even better is that it gathers and stores those metrics in a SQLite database! I copied the database off to a new file as soon as each test was done and then wrote a query that could get me the average and max values for all of those metrics throughout the test.

For PC I wrote a PowerShell script that would pull the metrics from performance counters and write them to a CSV file every 5 seconds. When the test was done I would kill the script, open the CSV in Excel, and do the same average and max calculations on the same metrics.
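For anyone who wants to replicate the PC side of this, here is a minimal sketch of that kind of collection script. It is not the exact script I ran, and the counter paths and output file name are just illustrative of the approach:

```powershell
# A sketch of the approach, not the exact script used for these tests.
# Counter paths and the CSV file name are illustrative. Ctrl+C stops collection.
$counters = @(
    '\Processor(_Total)\% Processor Time',        # CPU utilization
    '\Memory\Available MBytes',                   # free RAM
    '\GPU Engine(*)\Utilization Percentage',      # GPU core usage (Windows 10+)
    '\PhysicalDisk(_Total)\Disk Read Bytes/sec',  # storage reads
    '\PhysicalDisk(_Total)\Disk Write Bytes/sec'  # storage writes
)

# Take one sample every 5 seconds and append the flattened results to a CSV.
Get-Counter -Counter $counters -SampleInterval 5 -Continuous | ForEach-Object {
    $stamp = $_.Timestamp
    $_.CounterSamples | ForEach-Object {
        [PSCustomObject]@{
            Time    = $stamp
            Counter = $_.Path
            Value   = [math]::Round($_.CookedValue, 2)
        }
    } | Export-Csv -Path 'denoise-metrics.csv' -Append -NoTypeInformation
}
```

The resulting CSV opens straight into Excel, where AVERAGEIF and MAXIFS against the Counter column produce the average and max values used in the tables below.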

DeNoise AI Version 3.4.2

Another critical piece to all of this testing, especially when I outsourced some of the testing to listeners who have access to computers I do not, is the version.  I started doing these tests when the most current version of DeNoise AI was 3.4.2. 

At the tail end of testing, Topaz Labs released version 3.5. Unless otherwise stated, all of the testing was done using version 3.4.2, with the testing taking place from December 2021 through January 2022.

Does a GPU Make DeNoise AI Faster?

I was tempted to get into the details of what a GPU is here, but I don't think that is actually important for photographers to understand, so we will skip it. The question is: do photographers need to make sure they have a GPU (graphics card) in their computer?

Topaz Labs says a powerful GPU is needed to run any of their software well.  Is that actually true, and what kind of a performance difference are photographers looking at if their computer doesn’t have a powerful GPU?

Let's walk through some testing results. First up is the worst case, which turns out to be the very best argument for having a powerful GPU in your computer to make DeNoise AI run its best. Let's talk about the testing I ran on the PC I custom built back in 2014 specifically for photo editing. The PC has a fourth generation Intel Core i7 4770K CPU, 32GB of RAM, and a GTX 1060 GPU with 3GB of VRAM.

If those specs don’t mean much to you, the CPU is over 8 years old, but the GPU was upgraded in 2017 and is only a little over 4 years old.  Here are the numbers that came out in the test:

| Computer Hardware | Run Time | % Slower | Sec / Image |
|---|---|---|---|
| Intel 4770K, 32GB, GTX 1060 3GB (GPU) | 0:57:48 | 0.00% | 12 |
| Intel 4770K, 32GB, GTX 1060 3GB (CPU) | 4:15:50 | -342.62% | 51 |
GPU clearly makes DeNoise AI faster with older computers
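A quick note on reading these tables: the “% Slower” column compares each run against the fastest run in the table (the 0.00% row), and “Sec / Image” is the total run time divided by the 300 test images. Here is that arithmetic for the CPU row, sketched in PowerShell (the same language as my metrics script):

```powershell
# How the table columns are derived (values from the rows above).
$baseline  = ([TimeSpan]'00:57:48').TotalSeconds   # fastest run (GPU)
$candidate = ([TimeSpan]'04:15:50').TotalSeconds   # CPU-only run

$percentSlower = ($baseline - $candidate) / $baseline * 100
$secPerImage   = [math]::Round($candidate / 300)   # 300 images per test

'{0:N2}% slower, {1} sec/image' -f $percentSlower, $secPerImage
# Output: -342.62% slower, 51 sec/image
```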

The GPU made a massive difference in the performance of DeNoise AI!  Though we really shouldn’t draw conclusions using a single data point.  Plus, this PC is so old most photographers probably have a computer that is newer.  Does the value of a GPU hold true with a new computer?

Here are the numbers from a PC that is only 2 years old.  A 10th generation Intel NUC that has the Core i7-10710U Processor, 64GB of RAM, but no real GPU.  It has only the Intel UHD graphics integrated with the CPU:

| Computer Hardware | Run Time | % Slower | Sec / Image |
|---|---|---|---|
| Intel NUC Core i7, 64GB, Intel UHD (GPU) | 2:03:41 | 0.00% | 25 |
| Intel NUC Core i7, 64GB, Intel UHD (CPU) | 3:58:39 | -92.95% | 48 |
Even a weak GPU makes DeNoise AI faster

Even with a weak integrated GPU, that is a significant improvement in performance. But that isn't really taking advantage of the best hardware available here in 2022. Here is what happened with a much more current PC: a 10th generation Core i9 10900K with 32GB of RAM and an RTX 2080 GPU with 8GB of VRAM:

| Computer Hardware | Run Time | % Slower | Sec / Image |
|---|---|---|---|
| Intel 10900K, 32GB, RTX 2080 8GB (GPU) | 0:31:12 | 0% | 6 |
| Intel 10900K, 32GB, RTX 2080 8GB (CPU) | 1:27:00 | -178.85% | 17 |
GPU makes DeNoise AI run faster on newer computers

Tests on 3 computers with dated hardware and newer hardware all show significant improvement in the performance of DeNoise AI when using a GPU for processing.  Without a doubt, GPU makes DeNoise AI faster.

This is all data from PC testing.  What about the new M1 processors from Apple in their newest Macs?  Don’t worry, I have a lot of testing of M1 Macs to share, but I don’t want to leave this topic about GPU just yet.  We need to talk about the amount of memory and cores on the video card first.

Does More GPU Memory Make DeNoise AI Faster?

We can see clearly from the data that GPU processing produces much shorter processing times in DeNoise AI over CPU processing.  What about the amount of RAM on the graphics card, called VRAM?  Let’s go over the GPU utilization metrics that were gathered in the tests:

| Computer Hardware | Run Time | % Slower | GPU Memory Avg (%) | GPU Memory Max (%) | GPU Processor Avg (%) | GPU Processor Max (%) |
|---|---|---|---|---|---|---|
| Intel 10900K, 32GB, RTX 2080 8GB (GPU) | 0:31:12 | 0.00% | 38 | 55 | 36 | 50 |
| Intel 9700K, 64GB, GTX 1660 Ti 6GB (GPU) | 0:45:15 | -45.03% | 28 | 43 | 50 | 100 |
| Intel 4770K, 32GB, GTX 1060 3GB (GPU) | 0:57:48 | -85.26% | 53 | 60 | 50 | 100 |
| AMD FX-8350, 32GB, RTX 2060 6GB (GPU) | 1:18:36 | -151.92% | 30 | 55 | 27 | 100 |
| Intel NUC Core i7, 64GB, Intel UHD (GPU) | 2:03:41 | -296.42% | 37 | 40 | 75 | 95 |
GPU memory doesn’t make DeNoise AI faster

The data shows that DeNoise AI never used all of the memory that was available. Across all the PCs that were tested, DeNoise AI never used more than 4GB of VRAM even when there was as much as 8GB available. Even on a PC that only had 3GB available, DeNoise AI still only used 60% of that memory.

Yes, there were processing time differences across these machines, some pretty significant. Unfortunately, we don't have testing data where the only difference between machines was the VRAM available, so we can't tell for sure if the amount of VRAM makes a difference in the performance of DeNoise AI. Still, any time testing shows a resource never reaching 100% utilization, it is a strong indicator that the resource is not being fully leveraged and therefore is not limiting performance.

Because the GPU memory was never 100% used in any of these tests, we can be pretty sure that GPU memory (VRAM) does not make DeNoise AI significantly faster.

Do GPU Cores Make DeNoise AI Faster?

Now let’s consider the other GPU metric, GPU processor utilization:

| Computer Hardware | Run Time | % Slower | Sec / Image | GPU Memory Avg (%) | GPU Memory Max (%) | GPU Processor Avg (%) | GPU Processor Max (%) |
|---|---|---|---|---|---|---|---|
| Intel 10900K, 32GB, RTX 2080 8GB (GPU) | 0:31:12 | 0% | 6 | 38 | 55 | 36 | 50 |
| Intel 9700K, 64GB, GTX 1660 Ti 6GB (GPU) | 0:45:15 | -45% | 9 | 28 | 43 | 50 | 100 |
| Intel 4770K, 32GB, GTX 1060 3GB (GPU) | 0:57:48 | -85% | 12 | 53 | 60 | 50 | 100 |
| AMD FX-8350, 32GB, RTX 2060 6GB (GPU) | 1:18:36 | -152% | 16 | 30 | 55 | 27 | 100 |
| Intel NUC Core i7, 64GB, Intel UHD (GPU) | 2:03:41 | -296% | 25 | 37 | 40 | 75 | 95 |
More GPU cores make DeNoise AI faster

Most of these PCs had moments where 100% of the GPU cores were being used.  Any time there is hardware reaching 100% utilization we have a strong indicator that this is a resource that can be the bottleneck in a system preventing the software from running faster.  

The average, though, was only about 50%, meaning the GPU cores were fully utilized only during portions of the test, roughly half of the time. As I watched these tests run over and over, I think the reason for this is that there is a lot of work still being done by the CPU. We will cover the impact of CPU in a moment, but it seems that DeNoise AI still needs the CPU about half the time, and while that work is going on the GPU is not being used.

For most of these PCs the video memory was never fully used, yet in nearly all of the testing the GPU processor hit 100% utilization during parts of the run. The data shows us that the number of GPU cores is more important for DeNoise AI processing than video memory.

Should photographers consider upgrading the video card in their computer to make DeNoise AI faster? Again, with no tests where the only difference was the GPU, I don't have the data to quantify this directly. The closest I have is some relative analysis between a current PC and my older 2014 PC:

| Computer Hardware | Run Time | % Slower | GPU Memory Avg (%) | GPU Memory Max (%) | GPU Processor Avg (%) | GPU Processor Max (%) |
|---|---|---|---|---|---|---|
| Intel 10900K, 32GB, RTX 2080 8GB (GPU) | 0:31:12 | 0% | 29 | 55 | 36 | 50 |
| Intel 4770K, 32GB, GTX 1060 3GB (GPU) | 0:57:48 | -85% | 53 | 60 | 50 | 100 |
Difference with GPU processing is 85%

A pretty significant difference for sure, but there are too many differences between these two computers to say how much of this is the faster GPU vs the faster CPU and faster RAM. The data can't tell us how much performance was gained from the higher GPU core count, but we can get some insight from a relative analysis of CPU-only processing between these two machines (DeNoise AI lets you run the software using the CPU only, with no GPU):

| Computer Hardware | Run Time | % Slower | GPU Memory Avg (%) | GPU Memory Max (%) | GPU Processor Avg (%) | GPU Processor Max (%) |
|---|---|---|---|---|---|---|
| Intel 10900K, 32GB, RTX 2080 8GB (CPU) | 1:27:00 | 0% | 15 | 28 | 2 | 4 |
| Intel 4770K, 32GB, GTX 1060 3GB (CPU) | 4:15:50 | -194% | 0 | 0 | 0 | 0 |
Difference with CPU only processing is much worse

Wow.  This is a testament to just how much better performance the 10th generation Core i9 gets over the 4th Generation Core i7.  Huge difference.  Much bigger than the difference when the processing was primarily being done on the GPU.  

Based on the relative analysis, it looks like upgrading the graphics card in a PC is NOT likely to make DeNoise AI run significantly faster. If your current PC doesn't have a discrete graphics card then adding one would be a massive benefit, but upgrading an existing one probably won't make DeNoise AI noticeably faster.

Do GPU Cores In Apple M1 Run DeNoise AI Fast?

What about the new Macs Apple has released that use their own system-on-a-chip Apple Silicon M1 processors? After all, there are GPU cores in those processors, and the GPU built into the Intel Core i7 did not run DeNoise AI well. How does Apple M1 run DeNoise AI in comparison to PCs that have powerful video cards in them?

Thanks to some help from listeners who did some testing on their computers I have the answer of not only how the M1 Macs fared, but how M1 compares to a modern PC running DeNoise AI:

| Computer Hardware | Run Time | % Slower | Sec / Image |
|---|---|---|---|
| 16″ M1 Max, 64GB (M1) | 0:30:26 | 0% | 6 |
| 14″ M1 Max, 32GB (M1) | 0:30:35 | -0.5% | 6 |
| Intel 10900K, 32GB, RTX 2080 8GB (GPU) | 0:31:12 | -3% | 6 |
| 14″ M1 Pro, 16GB (M1) | 0:31:39 | -4% | 6 |
| 13″ M1, 16GB (M1) | 0:39:06 | -28% | 8 |
| Intel 9700K, 64GB, GTX 1660 Ti 6GB (GPU) | 0:45:15 | -49% | 9 |
| Intel 4770K, 32GB, GTX 1060 3GB (GPU) | 0:57:48 | -90% | 12 |
| AMD FX-8350, 32GB, RTX 2060 6GB (GPU) | 1:18:36 | -158% | 16 |
| Intel 10900K, 32GB, RTX 2080 8GB (CPU) | 1:27:00 | -186% | 17 |
| Intel NUC Core i7, 64GB, Intel UHD (GPU) | 2:03:41 | -306% | 25 |
| Intel NUC Core i7, 64GB, Intel UHD (CPU) | 3:58:39 | -684% | 48 |
| Intel 4770K, 32GB, GTX 1060 3GB (CPU) | 4:15:50 | -741% | 51 |
14″ M1 Mac has tremendous price to performance running DeNoise AI

The M1 Macs did more than hold their own; they were the very fastest at processing these 300 images! Not by a wide margin, as the fairly current PC was only 3% slower, and it is highly likely that a PC with newer hardware could beat the M1. But the point of the test was not M1 vs PC. The key takeaway is that the GPU is very important in making DeNoise AI run fast, and Macs with M1 Apple Silicon have enough GPU capability to run DeNoise AI very well.

In fact, take a look at the GPU processor utilization on M1 Macs (NG=Not Gathered):

| Computer Hardware | Run Time | % Slower | GPU Memory Avg (%) | GPU Memory Max (%) | GPU Processor Avg (%) | GPU Processor Max (%) |
|---|---|---|---|---|---|---|
| 16″ M1 Max, 64GB (M1) | 0:30:26 | 0% | NG | NG | NG | NG |
| 14″ M1 Max, 32GB (M1) | 0:30:35 | -0.5% | 46 | 77 | 37 | 60 |
| 14″ M1 Pro, 16GB (M1) | 0:31:39 | -4% | 46 | 78 | 37 | 42 |
| 13″ M1, 16GB (M1) | 0:39:06 | -28% | 47 | 76 | 41 | 56 |
DeNoise AI didn’t use all of the GPU processing offered by M1

I don't have the numbers on GPU utilization for the 16″ M1 Max MacBook Pro, but for the rest of the tests the GPU cores were never used above 60%. It could be that M1 Macs have a bottleneck with another resource preventing the GPU from being fully utilized. However, after we consider the rest of the metrics from the tests, I think it is more likely that Topaz Labs has room to improve their software to take better advantage of the new M1 Apple Silicon and make DeNoise AI even faster.

We have pretty well exhausted the takeaways from the GPU data, so let's move on to the CPU.

Does Faster CPU Make DeNoise AI Faster?

We have already proven that DeNoise AI runs best when GPU processing capabilities are available. Does that mean the CPU in the computer doesn't really matter? I don't have the exact testing scenario needed to say for sure, but we do have some testing data that is pretty close:

| Computer Hardware | Run Time | % Slower | Sec / Image |
|---|---|---|---|
| Intel 10900K, 32GB, RTX 2080 8GB (GPU) | 0:31:12 | 0% | 6 |
| AMD FX-8350, 32GB, RTX 2060 6GB (GPU) | 1:18:36 | -152% | 16 |
CPU still matters for DeNoise AI processing

The GPU in both of these computers is of the same generation. There is a difference in memory on the GPU, but we have already seen that doesn't seem to matter for the performance of DeNoise AI. There is also a difference in the number of CUDA cores: the 2080 has about 50% more GPU cores. If the performance difference here were about 50%, that would be a strong indicator that the GPU is really all that matters.

Instead we have a difference of about 150%! There are pretty significant differences between the CPUs in these two computers, and those differences have to account for much of that massive gap in performance. The CPU affects the performance of DeNoise AI even when a powerful GPU is available and processing is set to happen on the GPU.
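To put numbers on that reasoning, here is the arithmetic. The CUDA core counts are NVIDIA's published specs for these two cards, not something from my tests:

```powershell
# RTX 2080 vs RTX 2060 core counts, per NVIDIA's published specifications.
$cores2080 = 2944
$cores2060 = 1920
'Core count advantage: {0:N0}%' -f (($cores2080 - $cores2060) / $cores2060 * 100)
# Output: Core count advantage: 53%

# Observed run time gap, from the table above.
$t2080 = ([TimeSpan]'00:31:12').TotalSeconds
$t2060 = ([TimeSpan]'01:18:36').TotalSeconds
'Observed run time gap: {0:N0}%' -f (($t2060 - $t2080) / $t2080 * 100)
# Output: Observed run time gap: 152%
```

The observed gap is roughly three times what the core counts alone would predict, which is what points the finger at the CPU.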

This might be the actual explanation for why the performance of DeNoise AI with the Apple M1 processors was so good.  The CPU and GPU on the M1 processors are all in the same chip and connected the fastest way possible.  The GPU cores in the M1 may not be as powerful for the AI processing in DeNoise as the CUDA cores in NVIDIA graphics cards, but the combination of the CPU p-cores (performance) and the high efficiency of the communication between those p-cores and the GPU cores in the M1 processor is really good for running DeNoise AI.

Should photographers consider upgrading the CPU in their computers to get better performance out of DeNoise AI?  An upgrade of only the CPU in a computer is unlikely to make a significant difference to the performance of DeNoise AI.  Photographers should only upgrade the CPU in their computers when they are investing in a new computer for their photo editing needs.

CPU still matters when running DeNoise AI, but photographers should only worry about the CPU in their computer when it is time for a new computer.  Now let’s talk about RAM.

Does More RAM Make DeNoise AI Faster?

Let’s start off the discussion about RAM by taking a look at the specific metrics related to how DeNoise AI used RAM and swap space on these computers:

| Computer Hardware | Run Time | % Slower | RAM Avg (GB) | RAM Max (GB) | Swap Avg (GB) | Swap Max (GB) |
|---|---|---|---|---|---|---|
| 16″ M1 Max, 64GB (M1) | 0:30:26 | 0% | NG | 40 | NG | 0 |
| 14″ M1 Max, 32GB (M1) | 0:30:35 | -0.5% | 15 | 32 | 0 | 1.2 |
| Intel 10900K, 32GB, RTX 2080 8GB (GPU) | 0:31:12 | -3% | 18 | 23 | 0 | 0 |
| 14″ M1 Pro, 16GB (M1) | 0:31:39 | -4% | 10 | 15 | 9 | 15 |
| 13″ M1, 16GB (M1) | 0:39:06 | -28% | 13 | 15 | 12 | 19 |
| Intel 9700K, 64GB, GTX 1660 Ti 6GB (GPU) | 0:45:15 | -49% | 17 | 23 | 0 | 0 |
| Intel 4770K, 32GB, GTX 1060 3GB (GPU) | 0:57:48 | -90% | 18 | 22 | 0 | 0 |
| AMD FX-8350, 32GB, RTX 2060 6GB (GPU) | 1:18:36 | -158% | 18 | 23 | 0 | 0 |
| Intel 10900K, 32GB, RTX 2080 8GB (CPU) | 1:27:00 | -186% | 19 | 23 | 0 | 0 |
| Intel NUC Core i7, 64GB, Intel UHD (GPU) | 2:03:41 | -306% | 20 | 24 | 0 | 0 |
| Intel NUC Core i7, 64GB, Intel UHD (CPU) | 3:58:39 | -684% | 35 | 41 | 0 | 0 |
| Intel 4770K, 32GB, GTX 1060 3GB (CPU) | 4:15:50 | -741% | 19 | 23 | 0 | 0 |
More than 32GB of system memory (RAM) doesn't make DeNoise AI faster

It is tough to compare these computers that have so many differences, but the data is very different between M1 Macs and PCs.  Let’s start with the PCs.

Except for a single anomaly with the Intel NUC doing the best it could without a real GPU, the PCs didn't use more than about 23GB of RAM in the testing. Even when a PC had 64GB of RAM available, only 23GB was used by DeNoise AI. DeNoise AI does not benefit from more than 32GB of RAM on a PC.

Should photographers who have 16GB of RAM in their PCs upgrade to 32GB to run DeNoise AI more quickly? I don't have the data to tell for sure, but my educated guess is that photographers are likely to see DeNoise AI run faster when a PC has at least 32GB of RAM, especially since photographers tend to have a lot of other applications open at the same time.

However, the recommendation gets a little different on the M1 Mac computers, where photographers are unlikely to see DeNoise AI run noticeably faster with more than 16GB of RAM.  It is going to take a minute to explain why.

There was some unexpected data in the testing that I am convinced illustrates a bug in DeNoise AI on M1 Macs. To explain the issue we need to take a look at the swap usage metrics. When a computer runs out of RAM and needs more, it starts to use the storage in your computer as RAM. Traditionally that has been a massive problem because storage has been significantly slower than RAM, though this has become less of an issue on both PC and Mac with the increased speeds of SSD storage.

All of the PCs tested had enough RAM that the swap never got used. On the M1 Macs with 16GB of RAM, swap usage climbed all the way up to 15GB before DeNoise AI finally released about a third of it, dropping to 10GB, only to have it rise back up to 15GB again.

M1 Mac computers use a lot of swap space to process raw files in DeNoise AI
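If you want to watch this behavior on your own Mac while DeNoise AI runs, macOS reports swap usage through sysctl. Here is a minimal sketch, shown in PowerShell Core to match my other tooling (the same sysctl call works from Terminal in any shell):

```powershell
# Minimal sketch: poll macOS swap usage every 5 seconds while DeNoise AI runs.
# vm.swapusage is a standard macOS sysctl key; output looks like:
#   vm.swapusage: total = 15360.00M  used = 14875.25M  free = 484.75M  (encrypted)
while ($true) {
    '{0}  {1}' -f (Get-Date -Format 'HH:mm:ss'), (sysctl vm.swapusage)
    Start-Sleep -Seconds 5
}
```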

The M1 Macs needed about 40GB of RAM where the PCs only needed 23GB. That's about 74% more RAM to process the same raw images! That shouldn't be. This has to be a bug, some tuning Topaz Labs needs to do to make DeNoise AI manage memory properly when processing raw files.

Even worse, there was a raw file type I couldn't get to work at all on an M1 Mac with 16GB of RAM. The swap grew uncontrollably to 50GB until macOS finally threw an out-of-memory error. I think the raw processing in DeNoise AI on M1 Macs has a memory leak. More on this in a moment.

The interesting thing to me is that even with this fairly big memory leak problem, the M1 Macs did very well in the testing. Usually a memory leak like this would cost a significant amount of performance. Even with the 14″ M1 Pro with 16GB of RAM needing a massive 15GB of swap to process the files, the run time was only 4% slower. Those are phenomenal results, probably due to the extremely fast SSD inside the M1 Macs being used for swap.

RAM matters for running DeNoise AI on PCs, enough that it might be worth the cost to upgrade. RAM doesn't matter as much for running DeNoise AI on M1 Macs. There is one more RAM-related test to cover, and then we need to talk about storage speed.

Should Photographers Run DeNoise AI In High, Medium, or Low RAM Mode?

Another test I had to run related to RAM was the setting in DeNoise AI for Low, Medium, and High RAM.  Does one of them dramatically reduce the RAM used and how does it affect performance?  After seeing the memory leak issue on the M1 Macs, I was hoping that I could use less swap by setting DeNoise AI to Low RAM mode.

| Computer Hardware | Run Time | % Slower | RAM Avg (GB) | RAM Max (GB) | Swap Avg (GB) | Swap Max (GB) |
|---|---|---|---|---|---|---|
| 13″ M1, 16GB (High RAM) | 0:39:07 | 0% | 10 | 15 | 11 | 19 |
| 13″ M1, 16GB (Medium RAM) | 0:39:33 | -1% | 10 | 15 | 11 | 19 |
| 13″ M1, 16GB (Low RAM) | 0:42:13 | -8% | 8 | 15 | 9 | 16 |
Even in Low RAM mode, DeNoise AI still used a lot of swap on M1 Macs

There was no difference in RAM usage between the High and Medium RAM modes, where both used up to 19GB of swap space, though the run was 1% faster under High. Low RAM mode used slightly less RAM and the swap only went to 16GB, but it was 8% slower. It isn't worth running DeNoise AI in anything but High RAM mode, even if your computer has only 16GB of RAM.

Does Faster Storage Make DeNoise AI Faster?

Photographers have always had a bigger need for storage than the average person, which for most means connecting external drives to their computers.  Does using faster and more expensive storage make a difference in how fast DeNoise AI processed those 300 raw images?

To find out I ran tests across all of my computers using various speeds of storage.  I also had to test to see if it mattered if faster storage was used for the raw files and slower for the processed images or vice-versa.

Key: Int=Internal SSD, TB4=Thunderbolt 4, U3G2=USB 3.1 Gen 2, SATA=Internal SATA, U3=USB 3.0

| Computer Hardware | Run Time | % Slower | Read Avg (MB/s) | Read Max (MB/s) | Write Avg (MB/s) | Write Max (MB/s) | Read Avail (MB/s) | Write Avail (MB/s) |
|---|---|---|---|---|---|---|---|---|
| 14″ M1 Pro, 16GB, Int to Int | 0:31:39 | 0% | n/a | n/a | n/a | n/a | 5700 | 5200 |
| 14″ M1 Pro, 16GB, TB4 to Int | 0:32:10 | -2% | 4 | 7 | n/a | n/a | 1500 | 5200 |
| 14″ M1 Pro, 16GB, U3G2 to TB4 | 0:32:26 | -2% | 4 | 5 | 16 | 19 | 580 | 1100 |
| 14″ M1 Pro, 16GB, TB4 to TB4 | 0:32:32 | -3% | 3 | 5 | 16 | 19 | 1500 | 1100 |
| 14″ M1 Pro, 16GB, Int to TB4 | 0:32:41 | -3% | n/a | n/a | 15 | 28 | 5700 | 1100 |
| 14″ M1 Pro, 16GB, TB4 to U3G2 | 0:32:48 | -4% | 4 | 6 | 15 | 28 | 1500 | 620 |
| 14″ M1 Pro, 16GB, U3G2 to U3G2 | 0:32:59 | -4% | 4 | 67 | 15 | 19 | 650 | 620 |
| 14″ M1 Pro, 16GB, U3 to U3 | 0:41:15 | -30% | 15 | 81 | 12 | 19 | 100 | 100 |
| 13″ M1, 16GB, Int to Int | 0:39:06 | 0% | n/a | n/a | n/a | n/a | 3000 | 2800 |
| 13″ M1, 16GB, TB4 to TB4 | 0:39:07 | 0% | 3 | 12 | 6 | 20 | 1500 | 1100 |
| 13″ M1, 16GB, U3G2 to U3G2 | 0:39:31 | -1% | 3 | 61 | 13 | 26 | 580 | 620 |
| 13″ M1, 16GB, Int to U3 | 0:43:29 | -11% | n/a | n/a | 12 | 15 | 3000 | 100 |
| 13″ M1, 16GB, TB4 to U3 | 0:43:43 | -12% | 3 | 10 | 12 | 20 | 1500 | 100 |
| 13″ M1, 16GB, U3G2 to U3 | 0:43:52 | -12% | 8 | 10 | 12 | 14 | 580 | 100 |
| 13″ M1, 16GB, U3 to U3 | 0:44:57 | -15% | 3 | 10 | 12 | 20 | 100 | 100 |
| 4770K, 32GB, Int to Int | 0:57:48 | 0% | 2 | 6 | 9 | 23 | 2100 | 2600 |
| 4770K, 32GB, SATA to Int | 0:58:09 | -1% | 2 | 6 | 9 | 23 | 160 | 2600 |
| 4770K, 32GB, USB3 to Int | 0:58:57 | -2% | 2 | 6 | 9 | 22 | 100 | 2600 |
| 4770K, 32GB, Int to SATA | 0:59:59 | -4% | 2 | 6 | 9 | 22 | 2100 | 160 |
| 4770K, 32GB, SATA to SATA | 1:00:25 | -5% | 2 | 6 | 9 | 22 | 160 | 160 |
| 4770K, 32GB, U3 to U3 | 1:05:24 | -13% | 2 | 6 | 8 | 22 | 100 | 100 |
Storage speed had at most 30% impact on DeNoise AI performance

I recorded the average and max read and write speeds while DeNoise AI was running. We have to throw out some of the numbers from the M1 Macs because of how the swap was being used: any time the internal SSD was involved, the read/write speeds were most likely the swap being used rather than DeNoise AI reading and writing images (those cells are marked n/a in the table).

Throwing out those numbers, across all the tests the reads maxed out at about 80MB/s and the writes maxed out at about 28MB/s. That is interesting, since the read/write speeds available were at worst 100MB/s. It makes the math pretty easy: at best DeNoise AI used about 80% of the read speed available and only about 25% of the write speed available. At worst, DeNoise AI used about 0.1% of the read speed and 0.2% of the write speed available.

DeNoise AI doesn’t use the full speed of any storage across any of the testing though storage connected faster than USB 3.0 is noticeably better.

I have seen this behavior of not fully using the speed of the storage for years.  I have long said that USB 3.0 connected storage was not enough of a bottleneck to make a noticeable difference in performance.  USB 2.0 and older was so slow it was laughable, but USB 3.0 storage was fast enough most photographers were unlikely to see a noticeable difference as they process images in any software.  

That advice looks like it still holds true with my 8 year old PC, where the performance difference was only about 13%. However, on current equipment where the IO capabilities have improved, the impact of using slower storage connected via USB 3.0 jumps up to 30%, which is big enough to be noticeable. Slower storage still wasn't terrible, especially when you factor in the cost of storage with faster connections, but 30% is bigger than I have seen in most testing of other software. I think the advice that USB 3.0 connected storage is fast enough still holds today, but it is on shaky ground.

The data also shows that if a photographer has to make a choice about using a mix of faster storage with slower storage, they should have the faster drive be the destination where DeNoise AI writes the processed files and use the slower drive for the original raw images.

Does DeNoise AI Process Some Raw Files Faster Than Others?

The final thing I wanted to find out was if DeNoise AI processed a raw file type faster than another.  I have yet to see photo editing software NOT be impacted by the raw files that you feed it.  It doesn’t just come down to the file size of the raw. There are some that seem to be more complicated to process even though they have a smaller file size.

| Raw File Type | Run Time | % Slower | Read Avg (MB/s) | Read Max (MB/s) | Write Avg (MB/s) | Write Max (MB/s) |
|---|---|---|---|---|---|---|
| Canon R6 CR3 | 0:32:32 | 0% | 3 | 5 | 16 | 19 |
| Olympus OMD-EM1M3 ORF | 0:33:12 | -2% | 3 | 3 | 15 | 20 |
| Panasonic GH5 RW2 | 0:33:36 | -3% | 3 | 4 | 16 | 20 |
| Nikon D3400 NEF | 0:36:21 | -12% | 3 | 24 | 17 | 35 |
| Canon 5D3 CR2 | 0:37:32 | -15% | 4 | 15 | 15 | 21 |
| Nikon D850 TIF | 0:58:28 | -80% | 4 | 7 | 21 | 32 |
| Nikon D850 NEF | 1:09:27 | -113% | 6 | 11 | 17 | 31 |
| Sony A73 ARW | 1:14:00 | -127% | 5 | 10 | 14 | 27 |
| Fuji XT-3 RAF | 1:33:29 | -187% | 2 | 6 | 7 | 24 |
Some raw file types processed much slower in DeNoise AI

DeNoise AI has very different processing times by raw type with Fuji raw files being the slowest to process.

There is a massive note to make on this data. I couldn't get the 300 NEF images from the Nikon D850 to process on an M1 Pro Mac with 16GB of RAM. An M1 Max with 64GB of RAM did complete the processing, but used a lot of swap to get there. Look at this (the first column is M1 RAM and raw file type):

| M1 RAM, Raw File Type | % Slower | CPU Avg (%) | CPU Max (%) | RAM Avg (GB) | RAM Max (GB) | Swap Avg (GB) | Swap Max (GB) | GPU RAM Avg (%) | GPU RAM Max (%) | GPU Core Avg (%) | GPU Core Max (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 16GB, Canon R6 CR3 | 0% | 24 | 37 | 10 | 11 | 10 | 15 | 49 | 75 | 39 | 46 |
| 16GB, Olympus OMD-EM1M3 ORF | -2% | 24 | 37 | 10 | 11 | 10 | 16 | 37 | 78 | 39 | 48 |
| 16GB, Panasonic GH5 RW2 | -3% | 24 | 36 | 10 | 11 | 10 | 15 | 41 | 74 | 38 | 45 |
| 16GB, Nikon D3400 NEF | -12% | 23 | 34 | 10 | 11 | 13 | 20 | 50 | 80 | 37 | 45 |
| 16GB, Canon 5D3 CR2 | -15% | 24 | 41 | 10 | 11 | 11 | 19 | 48 | 83 | 34 | 44 |
| 64GB, Nikon D850 NEF | -74% | 18 | 28 | 23 | 28 | 29 | 36 | 17 | 50 | 27 | 43 |
| 16GB, Sony A7R3 | -127% | 25 | 34 | 10 | 10 | 37 | 46 | 41 | 80 | 32 | 58 |
| 16GB, Fuji XT-3 RAF | -187% | 26 | 31 | 7 | 7 | 0 | 0 | 44 | 75 | 28 | 43 |
NEF raw from Nikon D850 needs 64GB RAM + 36GB swap on M1

Processing raw files on M1 Mac computers in DeNoise AI is horrifyingly inefficient from a RAM usage perspective. Processing these same raw file types on a PC did not result in this high RAM and swap utilization. Something is very wrong here.

Due to the memory leak I talked about earlier, DeNoise AI used so much swap on the M1 Pro with 16GB of RAM that the processing was stopped by macOS after the swap reached 50GB at about 125 images. The only way the Nikon D850 NEF raw files completed on the M1 Pro with 16GB of RAM was to process them 100 at a time, closing DeNoise AI between runs (which is how the time in the table was produced).

Converting those images to DNG didn't fix the problem; DeNoise AI was again stopped after about 125 images and 50GB of swap. Converting them to TIF, on the other hand, worked. In fact, there was virtually no swap used at all in processing those images and they finished faster. The downside is that the conversion of the NEF files to TIF took much longer than the time saved.

Throwing the max RAM at the problem did solve it, but barely. An M1 Max with 64GB of RAM had to use 36GB of swap space in order to process the files. It got through all 300, but it was about 75% slower on those Nikon D850 NEF files than on the fastest Canon CR3 files. Even more interesting, on the M1 Max with 64GB of RAM the RAM utilization never got above 28GB. Something is wrong with how DeNoise AI on M1 Macs processes raw files, and it is worst with those from the Nikon D850.


Comments


  1. Hi Jeff – This is great information as I am researching a new laptop for photo editing. I have a four year old Toshiba laptop with an Intel Core i5 processor and Intel HD Graphics 5500. I’ve upgraded to a 2GB SSD and to 16GB of RAM. I’ve been a Windows user, but I am also looking at the Macs to see which will give me better performance. I use DeNoise AI with R6 CR2 raw files and also Sharpen AI, which I find is much more of a resource hog than DeNoise. Do you have experience with Sharpen AI that you can compare to your findings in DeNoise AI? If you do, I’d like to know whether you recommend more CPU and graphics RAM for Sharpen AI users than what you recommend from the DeNoise AI test results. Thank you for all of your research and sharing of information on software and hardware. It is much appreciated.

  2. Hello Jeff. An old Lenovo W541 user here: i7 4810, 32GB RAM, Nvidia K2100, Samsung SSD. I work with architectural photos as a freelancer. What do you suggest I change on a low budget? A faster CPU, a more powerful GPU, or both?
    Eastern European low budget 🙂 – Hello and thank you. Tom

    1. Author

      @Tom,

      Thanks for stopping by and asking your question. As is pointed out by the bold text in the article, GPU matters more to the performance of DeNoise AI than CPU. Therefore my advice would be to invest more heavily in a GPU that has more cores as that mattered the most in my testing over the amount of memory on the GPU.

  3. i7-8700 CPU: 9 seconds
    GT 1030 GPU: 13 seconds
    It’s a matter of which GPU versus which CPU. Granted, most GPUs are faster than most CPUs. I suppose a GTX 1050 Ti would have taken half the time of the GT 1030 and therefore been faster than the i7-8700. It’s just that not all GPUs are faster than all CPUs.
