Challenge

In GitLab-based development workflows, Merge Requests serve as the final quality gate before code is merged into the main codebase. They are critical for ensuring correctness, security, and long-term maintainability.

However, as development velocity increases, maintaining consistent review quality becomes significantly more challenging.

Reviewers are expected to quickly identify a wide range of issues, including accidentally committed credentials, inefficient queries, performance regressions, and small inconsistencies that gradually accumulate into technical debt. While these issues are often not complex, they are repetitive and require constant attention.

Over time, this creates a growing cognitive load. Reviewers spend a significant portion of their effort identifying the same patterns and re-explaining them across multiple Merge Requests. Under time pressure, this reduces attention to detail and shifts focus away from higher-value activities such as architectural decisions and system design.

As a result, maintaining a strong and consistent review culture at scale becomes increasingly difficult without additional support.

Solution

To support the review process without replacing it, we developed the Right&Above Engineering Assistant – a self-hosted AI-powered service integrated directly into GitLab Merge Request workflows.

The assistant is built on top of an internal AI platform deployed entirely within private infrastructure, where all components, including large language models, operate under enterprise security and governance standards.

It automatically joins Merge Request discussions and analyzes code changes within seconds, acting as an additional quality gate before merge.
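The event-driven trigger behind this could be sketched as a minimal webhook filter. This is a hypothetical illustration, not the assistant's actual implementation; the field names follow GitLab's merge request webhook payload:

```python
# Sketch of the webhook-side trigger logic (illustrative only).
# GitLab posts a JSON payload for each merge request event; the service
# decides whether the change set should be (re)analyzed.

ANALYZABLE_ACTIONS = {"open", "reopen", "update"}

def should_analyze(event: dict) -> bool:
    """Return True if this webhook event should trigger diff analysis."""
    if event.get("object_kind") != "merge_request":
        return False
    attrs = event.get("object_attributes", {})
    # Re-analyze on every new commit ("update") as well as on open/reopen.
    return attrs.get("action") in ANALYZABLE_ACTIONS
```

In practice a filter like this keeps the analysis pipeline idle for unrelated events (pushes, pipeline updates) and wakes it only when a merge request changes.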

Its core functionality includes:

  • Diff analysis with a clear verdict
    identifying critical issues that must be resolved before merge
  • Code quality suggestions
    providing improvements when no vulnerabilities are found, helping make code cleaner and easier to maintain
  • On-demand explanations
    allowing developers to mention the assistant and receive clear, plain-language explanations of issues, associated risks, and recommended fixes

The assistant continuously re-evaluates code with every new commit and adapts the depth of analysis depending on the scope of the changes.
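One way to adapt analysis depth to the scope of a change, as described above, is a size-based heuristic. The thresholds and depth names below are purely illustrative, not the assistant's actual policy:

```python
def analysis_depth(lines_changed: int, files_changed: int) -> str:
    """Map the scope of a diff to an analysis depth level.

    Thresholds are illustrative; a real policy might also weigh file
    types, security-sensitive paths, or historical defect rates.
    """
    if lines_changed <= 20 and files_changed <= 2:
        return "quick"      # style and obvious-issue pass only
    if lines_changed <= 300:
        return "standard"   # full diff analysis with a verdict
    return "deep"           # cross-file reasoning for large changes
```

Because the check runs on every new commit, a merge request can move between depth levels as its diff grows or shrinks.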

Beyond automation, the assistant plays a broader role in the development process:

  • Reduces review friction by handling repetitive comments and routine checks
  • Reinforces engineering standards by consistently applying best practices
  • Supports developer growth by turning feedback into understandable, actionable insights

From an operational perspective, the solution is fully aligned with enterprise requirements:

  • Deployed entirely within private infrastructure
  • No transmission of source code or metadata to external services
  • Deterministic behavior ensuring consistent results across commits

The assistant supplements human reviewers, allowing them to focus on architecture and system-level decisions rather than repetitive checks.

Result

During internal usage, the assistant became a reliable additional layer in the code review process.

It consistently identified issues such as unintentionally exposed credentials, potential client-side vulnerabilities, and performance inefficiencies. It also suggested many incremental improvements to code clarity and maintainability.

By taking over routine checks and repetitive feedback, the assistant reduced the operational load on reviewers and allowed them to focus more on architectural and design-level discussions.

This shift improved not only the efficiency of the review process but also its quality. Reviews became more focused, more consistent, and more valuable from a long-term engineering perspective.

In parallel, the assistant supported knowledge sharing within the team by turning everyday feedback into clear, contextual explanations that developers could immediately apply in their work.