Using AI to Detect Runtime Errors in Static Code Analysis

As software development continues to advance, the demand for more sophisticated and effective methods of detecting and preventing errors in code has increased. One promising area of research is the use of artificial intelligence (AI) to detect runtime errors through static code analysis. In this blog post, we examine the potential of AI to detect runtime errors and discuss the current state of research in this field.

Runtime errors occur during the execution of code. These errors can cause software to behave unpredictably, resulting in crashes, data corruption, and other problematic outcomes. Detecting and preventing runtime errors is a critical aspect of software development, yet traditional methods of catching them are time-consuming and typically surface problems only in the later stages of the development process.
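To make this concrete, here is a toy Python example (not taken from any particular codebase) of a defect that a syntax check happily accepts but that fails the moment the code runs:

```python
# This function parses and imports cleanly; no syntax checker objects.
def average(values):
    return sum(values) / len(values)

# At runtime, an empty list makes len(values) zero and the division
# raises ZeroDivisionError, crashing the program.
average([])
```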

AI has the potential to revolutionize the way that runtime errors are detected and prevented. By analyzing large amounts of code and data, AI algorithms can learn patterns and correlations that are indicative of runtime errors. This information can then be used to identify potential errors in new code before it is executed.

Current State of AI in Runtime Error Detection

While the potential of AI for detecting runtime errors is promising, the field is still in its early stages. There are currently only a limited number of AI-based static analysis tools available for detecting runtime errors. However, research in this field is advancing rapidly, and it is likely that AI will play an increasingly important role in runtime error detection. At Metabob, we have been able to detect runtime errors such as GPU/CPU initialization errors, memory leaks, and race conditions using our AI.
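To illustrate the kind of defect we are after (a toy example, not one of Metabob's internal test cases), here is a classic Python race condition that is invisible to conventional linters:

```python
import threading

counter = 0

def increment():
    global counter
    for _ in range(100_000):
        # Read-modify-write on shared state is not atomic: another
        # thread can interleave between the read and the write.
        counter += 1

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000, but lost updates mean the result may be smaller.
print(counter)
```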

A little bit about our technology at Metabob...

We have built a graph attention-based neural network that classifies problematic code and embeds context information. We employ a two-stage system for accurately embedding context information within a single graph. First, we split the source code into semantic tokens with an nlp2 tokenizer and generate 80-dimensional vector embeddings using FastText, which has been trained on code snippets of the language in question.
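A minimal sketch of that first stage, with two loudly labeled assumptions: the regex tokenizer below is a stand-in for the nlp2 tokenizer, and the tiny two-line corpus stands in for a large corpus of same-language code.

```python
import re
from gensim.models import FastText

def tokenize(source: str) -> list[str]:
    # Rough stand-in for the nlp2 tokenizer: split code into
    # identifiers, numbers, and individual operator/punctuation chars.
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", source)

# Placeholder corpus; in practice this would be a large body of code.
corpus = [
    tokenize("def add(a, b): return a + b"),
    tokenize("result = add(1, 2)"),
]

# Train 80-dimensional FastText embeddings over the token streams.
model = FastText(sentences=corpus, vector_size=80, window=5, min_count=1)

print(model.wv["add"].shape)  # (80,)
```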

We then map those text tokens to groupings identified in the abstract syntax tree. Rather than creating a node for every individual text token, we treat a function call with its attributes as the smallest individual grouping, and we average the embeddings across each token type.
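A rough sketch of that grouping step, under stated assumptions: Python's ast module stands in for our parser, and embed is a hypothetical lookup into the FastText vectors trained above.

```python
import ast
import re
import numpy as np

def embed(token: str) -> np.ndarray:
    # Hypothetical stand-in: in practice this would be model.wv[token]
    # from the FastText model in the previous sketch.
    rng = np.random.default_rng(abs(hash(token)) % (2**32))
    return rng.standard_normal(80)

source = "result = add(1, 2)"
tree = ast.parse(source)

group_embeddings = {}
for node in ast.walk(tree):
    if isinstance(node, ast.Call):
        # Treat the whole call, with its arguments, as the smallest grouping.
        call_src = ast.get_source_segment(source, node)
        tokens = re.findall(r"[A-Za-z_]\w*|\d+", call_src)
        # Average the token embeddings into one vector per grouping.
        group_embeddings[call_src] = np.mean([embed(t) for t in tokens], axis=0)

print(list(group_embeddings))  # ['add(1, 2)']
```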

The seed data for the system consists of code changes and the surrounding documentation explaining why each change was made. We use a BERTopic-based topic modeling system to identify and categorize, from that documentation, the reason behind each change.
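A minimal sketch of that step; the change descriptions below are invented placeholders for the real seed data, repeated so the toy example has enough documents for BERTopic's default clustering settings.

```python
from bertopic import BERTopic

# Invented placeholder documentation; the real seed data would be
# thousands of commit messages and change descriptions.
base_docs = [
    "Fix memory leak when the GPU context is never released",
    "Guard against a race condition in the worker thread pool",
    "Release file handles to plug a resource leak on shutdown",
    "Add a lock around the shared counter to avoid a data race",
]
docs = base_docs * 50  # pad the corpus so UMAP/HDBSCAN have enough points

topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)

# Each document gets a topic id; the topic's top words act as the
# category for why the change was made.
print(topic_model.get_topic_info())
```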

Some closing thoughts...

AI-based static code analysis tools such as Metabob will increasingly help developers prevent coding errors, particularly more complex ones. As AI algorithms continue to improve and more data becomes available, detecting runtime errors with AI is likely to become significantly more effective.

I am currently looking for feedback on our tool, which is available as a VS Code extension to analyze, debug, and refactor Python code. The tool is free to use. The link to the tool can be found here. More information about Metabob can be found at https://metabob.com/, and you can review our demo video here. If you end up testing the tool or have feedback for other reasons, please reach out to me at .