How to Dodge SAST Findings

Not development advice.

CodeThreat
10 min read · Sep 1, 2021

Your company has the security scanning tools. The tools are executed on your applications, and they produce findings. But guess what? Mitigating these issues is a pain in the neck. You may just wish these findings would disappear instead of spending time understanding and mitigating them.

If you are a developer, you are in the right place. In this article, we will go unorthodox and try to explain how you can avoid some of the findings of static application security testing (SAST) tools, without actually fixing them.

Basic camouflage techniques in software security.

Just the last sentence above is enough to give a security expert goosebumps. Therefore, let’s leave the disclaimer here, hopefully to prevent starting a flame war:

None of the methods listed or implied in this article should be applied in real life. This article should be seen both as informative content about the behavior of static security tools, which should have an important role in the daily life of every software team, and as a reminder, once again, that we, the developers, are solely responsible for the security of the code we write.

Verification vs. Bug Finding

OK. Now let’s start with a fundamental piece of information: if the automated security tool (SAST or DAST) you are using shows no evidence of a SQL Injection weakness in your application, it does not mean that your application is actually free of it.

Almost everyone who deals with information security knows this. In more technical terms, automated security scanners are not general verifiers; they cannot prove the absence of security vulnerabilities or code quality issues beyond a certain complexity.

These tools, by their nature, cannot provide general verification. For those who are curious, I recommend the stimulating, albeit a bit old, 1979 article “Social Processes and Proofs of Theorems and Programs”.

However, the same tools, as engineering products, do a very good job of finding bugs in software. In fact, based on my experience, I can easily state that:

For certain types of high-risk findings, SAST products are the best tools we have for finding security flaws in our applications.

Assumptions and Approximations

Our general expectation of static application security testing tools is that they reveal every security vulnerability in any piece of software. However, due to complexity and performance constraints, this unrealistic expectation turns into a working and useful product only through some assumptions and approximations.

Let me try to explain with a simple statistic:

A code review* that a developer can perform in 4 or 5 days can be done in 1–2 hours** by an automated product.

In addition to the incredible security benefits they bring, we also know that these assumptions and approximations have an impact on both the false alarm (FP) and missed finding (FN) rates.

Although unlikely, it’s possible for both types of deficiencies to be exploited by developers to let code errors they don’t want to fix slide past SAST tools. :)

Let’s examine these abuse scenarios, some of which are very obvious and some of which require a little more work, under five headings.

Bypassing Sensitive Data Management Controls

One of the most difficult issues in software is sensitive data management. Data such as personally identifiable information, encryption keys, API tokens, and system, health, and membership information are considered sensitive according to many security standards and regulations, such as NIST SP 800-53 and GDPR.

So, how can an automated program understand whether sensitive data exists in the code, how it is processed, and how it flows through the application?

Valid and difficult questions. But let’s just make a simple guess about the first one.

A static application security testing tool can deduce a password defined in the script or configuration file from the corresponding variable or key name.

For example, since the variable in the following Java code contains the word passw, our tool may conclude that the embedded text, a hard-coded literal, is a password. This technique can produce many false alarms on its own, but on the other hand, it is the most effective method of minimizing the rate of missed findings.

String password = "v1o!gjs49#1glb53*";

Let me point out right away that there are much more reliable and feasible methods that can be used in tandem. But this is an easy-to-understand example of our goal here.

Changing variable names is the most natural way to circumvent some of these findings, because this way the automated tools will no longer attribute these declarations with hard-coded values to sensitive data.
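For illustration only, here is a hypothetical rename (the variable name connectionHint is invented for this sketch); the hard-coded secret is still there, but a naive keyword-based check has nothing to latch onto.

// the same hard-coded secret, hiding behind an innocuous-looking name
String connectionHint = "v1o!gjs49#1glb53*";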

Here are two more, perhaps more sophisticated, methods to keep SAST tools from detecting our hard-coded secrets. The first is to use a different data type, for example an array:

String secrets [] = { "v1o!gjs49#1glb53*" };

The second method is another technique in the same direction:

class PasswordStore
{
    public String value;
}

PasswordStore ps = new PasswordStore();
ps.value = "v1o!gjs49#1glb53*";

If you notice, the difference between these two methods and the original one is that there may be no need to change the variable or class names. :)

Hiding Control Flow Findings

Control flow has an important place in static program analysis techniques. By computing control flows, data flows can be produced more precisely and easily. Moreover, security vulnerabilities related to the control flow of statements can be revealed.

Database, network, and file resources that are not released, or are released incorrectly, are examples of such vulnerabilities. The issue we are about to look at here is faulty exception handling.

try
{
    // a code block that can produce exceptions at runtime
}
catch(AnExceptionType aet)
{
}

Looking at the code, I’m sure you immediately caught the code quality problem in the example above. Within the catch block (assuming there is no finally block), no action is taken. If there is nothing to do when an exception is thrown, it would be a much more logical scenario to let it propagate to an upper layer without catching it at all.

In this case, a SAST tool will almost certainly want you to take action inside the catch block. It may even recommend that you implement a correct finally block.

To avoid such findings without making any changes, the first thing that comes to mind is to fill the catch block with ineffective statements.

try
{
    // a code block that can produce exceptions at runtime
}
catch(AnExceptionType aet)
{
    int happy_SAST_happy_Developer = 5;
}

What a “creative” way of mitigating a valid problem, right?

It is highly probable that we will no longer see this finding in our list of findings; we have swept it under the rug and ignored it.

Hiding security findings without fixing them is sweeping all the dirt under the rug.

We are now good to go, at least until we are given the task of solving a difficult runtime error in production, when the lines in the log files will be of no use because we have swallowed the exception…
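For contrast, here is a minimal sketch of what honest handling might look like, assuming a standard java.util.logging logger (the class and method names HonestHandling and parseQuantity are invented for the example): either log the exception with enough context to debug it later, or rethrow it so an upper layer can react.

import java.util.logging.Level;
import java.util.logging.Logger;

public class HonestHandling
{
    private static final Logger LOG = Logger.getLogger(HonestHandling.class.getName());

    // Parse a quantity; if the input is bad, record it and let the caller decide.
    static int parseQuantity(String raw)
    {
        try
        {
            return Integer.parseInt(raw);
        }
        catch (NumberFormatException nfe)
        {
            LOG.log(Level.WARNING, "Invalid quantity received: " + raw, nfe);
            throw nfe; // do not swallow: the caller knows what a bad quantity means
        }
    }
}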

Camouflaging Dangerous Data Flows

Data flow analysis is one of the most important parts of a precise static analysis tool. Injection-type findings, which are especially troublesome for applications, can be found with this analysis type while keeping false alarm (FP) and missed finding (FN) rates low.

To ensure that both rates are low, analysis tools use many algorithms, shortcuts and approximations (e.g. k-bounded de-referencing, sensitivities, points-to analysis, etc.).

Well, can an analysis tool have low FP and FN rates without employing any shortcuts or approximations? If we insist on that, two things will follow:

  1. Static analysis scans will take considerably longer time to finish.
  2. This will incur more costs for designing, implementing and maintaining the tools.
  3. I wrote two items, but let me add one more: in some cases, it is not even possible to perform static analysis at all without using approximations.

While it is not natural to want to hide a SQL Injection vulnerability, it is possible to code data flows that SAST tools cannot calculate due to the aforementioned costs.

Here, I will confine myself to showing an example of one of these techniques, which is very dangerous and, ironically, more difficult than correcting the finding :)

class C
{
    public string x;
}

class Program
{
    static void Main(string[] args)
    {
        C a = new C();
        C b = a;                                   // a and b now alias the same object
        string input = System.Console.ReadLine();
        Mix(a, b, input);
    }

    private static void Mix(C a, C b, string n)
    {
        a.x = n;                                   // write through one alias...
        System.Diagnostics.Process.Start(b.x);     // ...read through the other: same memory
    }
}

Reviewing the data flow in the C# code above, it’s obvious that it contains a Command Injection vulnerability. By tracking references, you can find this vulnerability, because the a and b arguments of the Mix method actually point to the same memory block.

However, this task is more difficult for a robust analysis algorithm (implementing points-to analysis) than it is for a reviewer’s eye. Although there are well-known algorithms that can be used for this task, they are often not fully implemented due to their cost.

In short, calculating which memory blocks references point to is one of the most basic yet most difficult problems of static program analysis, studied for decades. And we can exploit this for dodging SAST tools. I am ashamed of writing such a thing, but I am also relaxed, knowing that exploiting these weaknesses requires a substantial understanding of static code analysis. It is easier to mitigate the problem than to delve into such a task.

Abusing Unsupported Language Features

The language features that analysis tools must support are both numerous and sometimes very complex. For example, all collection data types, such as HashMap and Dictionary, should be supported, as well as features that may trigger complicated control flows, such as throw statements.

In addition, complex and difficult-to-understand structures and APIs such as Generics, Inheritance, and Reflection should generally be covered by the analysis algorithm.
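As a hypothetical illustration of the collection point (the class name MapDetour and the flow are invented for this sketch), consider tainted input taking a detour through a HashMap; a tool that does not model map contents precisely may lose track of the value between the put and the get.

import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class MapDetour
{
    public static void main(String[] args) throws Exception
    {
        Map<String, String> bag = new HashMap<>();
        bag.put("cmd", new Scanner(System.in).nextLine()); // tainted value goes into the map
        Runtime.getRuntime().exec(bag.get("cmd"));         // and comes back out at the sink
    }
}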

No analysis tool can support all of these structures, due to complexity and time cost. And sometimes they cannot support them for technical reasons. So, these analysis tools use shortcuts. For tools that support more than one language in particular, the difficulty of this task peaks.

That is to say, the effort to translate all languages into a single model in order to reduce these costs somewhat adds salt and pepper to the difficulty.

Let’s continue with a JSF example of how language or framework richness can affect finding quality. For those who are not familiar, JSF (JavaServer Faces, now Jakarta Server Faces) is a Java specification that we can use to create component-based user interfaces for web applications. It was very popular among Java web developers 10–15 years ago.

Component-based server-side frameworks are often not welcome, especially in dynamic security audits, due to ViewState and request/response parameter complexity. However, as in all web applications, it is possible for us developers to create serious security vulnerabilities when using these frameworks.

For example, the following XHTML/JSF code sends the value of a request parameter named bio, which contains rich content, to the browser. Because of this rich content, the escape attribute, which is true by default, has been set to false.

As such, this JSF code, which contains a Cross-Site Scripting (XSS) vulnerability, can be caught by static application security testing tools.

<h:body>
    <h:outputText value="#{param['bio']}" escape="false"/>
</h:body>

If we want to circumvent this finding without correcting it, in keeping with our theme, we can actually create a custom template that does the same job as the code above.

<ui:composition xmlns:ui="http://java.sun.com/jsf/facelets"
                xmlns:h="http://xmlns.jcp.org/jsf/html">
    <h:outputText value="#{param['bio']}" escape="false"/>
</ui:composition>

It is possible to position this template on our original XHTML page as follows.

<h:body>
    <my:customOutputText/>
</h:body>

In this way, we customize the structure that leads to XSS with a technique unique to the JSF application framework and use it instead of the original markup.

It is not plausible to expect static analysis tools to support every feature of every programming language or application framework. Therefore, with the above structure, it is possible to circumvent the XSS finding without resolving it.

I won’t go into detail here, but the same detour can be followed using the languages’ Inheritance and Reflection features; a brief sketch of the reflection variant is shown below.
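For illustration only, a reflective detour might look roughly like the following sketch (the class name ReflectiveDetour is invented, and whether a given tool resolves this depends entirely on how it models reflection).

import java.lang.reflect.Method;
import java.util.Scanner;

public class ReflectiveDetour
{
    public static void main(String[] args) throws Exception
    {
        String input = new Scanner(System.in).nextLine(); // tainted source

        // The dangerous Runtime.exec sink is reached only through reflection,
        // so a tool that does not resolve reflective calls never links the input to it.
        Object runtime = Class.forName("java.lang.Runtime").getMethod("getRuntime").invoke(null);
        Method exec = runtime.getClass().getMethod("exec", String.class);
        exec.invoke(runtime, input); // the hidden sink
    }
}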

Utilizing 3rd Party Libraries

Nowadays, it’s nearly impossible to write a decent application without using any third-party libraries. Our projects always include an external library for one requirement or another. And this is understandable, since it’s all about speed, right? We wouldn’t have time to implement all these requirements if we had to write them all ourselves.

Moreover, a static analysis tool naturally cannot analyze code that does not exist. So, for instance, when the analysis comes across a method call whose implementation code is not available, the tool chooses to either halt due to a build error, give up on calculating the flows, or perhaps make an assumption.

The first alternative is unwanted. Nowadays, users ask for analysis tools with no-build requirements.

The second alternative, while producing false negatives, can be abused. A third-party library can be used to wrap dangerous API calls, and since the analysis tools will not have the actual code, the sinks (the dangerous API calls) will be hidden.
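As a hypothetical sketch (OpaqueRunner and WrapperDetour are invented names; the wrapper stands in for a pre-compiled dependency), the idea looks like this. In a real abuse scenario the wrapper would ship inside a third-party JAR, so a tool scanning only the application source would never see the exec call inside it.

import java.io.IOException;
import java.util.Scanner;

// Imagine this class lives in an external, pre-compiled library.
final class OpaqueRunner
{
    static void run(String cmd) throws IOException
    {
        Runtime.getRuntime().exec(cmd); // the dangerous sink, invisible to a source-only scan
    }
}

public class WrapperDetour
{
    public static void main(String[] args) throws IOException
    {
        String input = new Scanner(System.in).nextLine(); // tainted source
        OpaqueRunner.run(input); // the taint disappears into code the tool cannot analyze
    }
}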

The third alternative is similar to the second one. However, it can also produce more False Positives.

In short, most of the alternatives will result in losing the flows and, therefore, missing a possible security issue.

Still, it’s worth adding: it is best not to follow these ways of dodging the findings, as resolving the findings will probably require less effort than dodging them.

But techniques similar to the ones described here can be used as a means of entertainment between you and the security team, provided that the findings are corrected before the code is deployed to production ;)

What Does All This Mean?

It’s worth repeating: the purpose of this article is not to encourage hiding static code analysis findings without actually fixing them.

Hiding the findings without correcting them will bring very serious security and possibly regulatory risks.

Although provocative, what the article wants to tell curious developers is that all automated security tools, including static analysis tools, have deficiencies in their methods of locating issues.

Although these tools should be an integral part of our development process, helping teams to pinpoint security findings in the code, they do not change the fact that code security is our responsibility as developers.

We should own the responsibilities of security findings just as we own the other bugs in the code we write.

For this reason, it is important that we, as development teams, understand the pros and cons of SAST tools. These tools make us conscious of the security issues in our code, watch our backs, and speed us up.

* The developer’s code review rate is taken as 1 hour per 400 lines, and 80K-100K lines are considered as the total number of lines to be analyzed.

** I’m talking about 200–300 checks here, for injection problems like SQLi, XMLi, XSS, and Directory Traversal, plus configuration and major cryptographic vulnerabilities.


CodeThreat

CodeThreat is a static application security testing (SAST) solution. Visit codethreat.com