💡 Debunking a Decades-Old Myth in Business Central: Efficiently Retrieving Grouped Data

In many Business Central implementations, there’s a recurring need: extract a distinct set of document identifiers from a filtered set of detail records. Whether you’re working with sales lines, purchase lines, ledger entries, or custom transactional data — the goal is the same: identify which documents are involved based on line-level criteria.

For years, a particular coding pattern has circulated as “best practice” in many organizations. It looks something like this:

local procedure GetSalesHeaderFromLines()
var
    SalesLine: Record "Sales Line";
    SalesOrders: List of [Code[20]];
begin
    SalesLine.SetRange("Document Type", SalesLine."Document Type"::Order);
    if not SalesLine.FindFirst() then
        exit;

    repeat
        // Narrow the filter to the current document and jump to its last line.
        SalesLine.SetRange("Document No.", SalesLine."Document No.");
        SalesLine.FindLast();

        SalesOrders.Add(SalesLine."Document No.");

        // Remove the filter again so that Next() moves to the first line of the next document.
        SalesLine.SetRange("Document No.");
    until SalesLine.Next() = 0;
end;

The idea behind this approach is to “skip” through the dataset, jumping to the last record of each group and collecting the document number. It feels efficient — fewer iterations, right?

But when tested against a simpler, more direct approach, the results tell a different story.

✅ The Simpler, Faster Alternative

Instead of skipping and resetting filters, just iterate through the filtered records and collect unique document numbers:

local procedure GetSalesHeaderFromLines()
var
    SalesLine: Record "Sales Line";
    SalesOrders: List of [Code[20]];
begin
    SalesLine.SetRange("Document Type", SalesLine."Document Type"::Order);
    if not SalesLine.FindSet() then
        exit;

    repeat
        // Only collect document numbers that are not in the list yet.
        if not SalesOrders.Contains(SalesLine."Document No.") then
            SalesOrders.Add(SalesLine."Document No.");
    until SalesLine.Next() = 0;
end;

This method avoids repeated filtering and jumping. It simply walks through the dataset once and uses a list to track uniqueness.
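
As a side note that is not part of the benchmark below: List.Contains performs a linear scan, so if a filter can return many thousands of distinct documents, a Dictionary keeps that lookup cheap. A minimal sketch of such a variant (the procedure name is only for illustration):

local procedure GetSalesOrdersAsDictionary()
var
    SalesLine: Record "Sales Line";
    SalesOrders: Dictionary of [Code[20], Boolean];
begin
    SalesLine.SetRange("Document Type", SalesLine."Document Type"::Order);
    if not SalesLine.FindSet() then
        exit;

    repeat
        // ContainsKey is a hash lookup instead of a linear scan through a list.
        if not SalesOrders.ContainsKey(SalesLine."Document No.") then
            SalesOrders.Add(SalesLine."Document No.", true);
    until SalesLine.Next() = 0;

    // SalesOrders.Keys now holds the distinct document numbers.
end;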

But we can go even faster if we know the records are sorted by the correct key, so that all lines of a document arrive consecutively:

local procedure GetSalesHeaderFromLines()
var
    SalesLine: Record "Sales Line";
    SalesOrders: List of [Code[20]];
    LastDocumentNo: Code[20];
begin
    SalesLine.SetRange("Document Type", SalesLine."Document Type"::Order);
    if not SalesLine.FindSet() then
        exit;

    LastDocumentNo := '';

    repeat
        // Because the lines arrive sorted, every new document number shows up exactly once per group.
        if SalesLine."Document No." <> LastDocumentNo then begin
            SalesOrders.Add(SalesLine."Document No.");
            LastDocumentNo := SalesLine."Document No.";
        end;
    until SalesLine.Next() = 0;
end;
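
One caveat about the last-variable trick: it only works when the filtered lines arrive sorted so that all lines of a document are adjacent. On "Sales Line" the primary key (Document Type, Document No., Line No.) already guarantees that order, but if a caller might have activated a different key, stating the sort order explicitly documents the assumption (one extra line before the FindSet call):

    // Sort by the primary key so that all lines of a document are read back to back.
    SalesLine.SetCurrentKey("Document Type", "Document No.", "Line No.");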

📊 Benchmark Results

Here’s a performance comparison based on 500 executions across different variants:

Method                          Execution Time   SQL Rows Read   SQL Statements
Group Skip (FindSet)            1 min 6 sec      1,369,141       214,501
Group Skip (Find('-'))          1 min 5 sec      1,374,630       214,501
Group Skip (FindFirst)          57 sec           1,369,141       214,500
Simple Iteration with List      7.7 sec          249,500         500
Iteration with Last Variable    6.6 sec          249,500         500
Group with Query                306 ms           107,500         500
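
If you want to reproduce such measurements in your own environment, a minimal wall-clock sketch could look like the following (it captures only the execution time, not the SQL statistics; the procedure name is just for illustration):

local procedure MeasureGetSalesHeaderFromLines()
var
    StartTime: DateTime;
    Elapsed: Duration;
begin
    StartTime := CurrentDateTime();

    GetSalesHeaderFromLines(); // swap in whichever variant you want to time

    Elapsed := CurrentDateTime() - StartTime;
    Message('Execution time: %1', Elapsed);
end;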

🔍 Why the Difference?

• The “group skip” method re-applies filters and performs additional reads per group.
• Each SetRange and FindLast triggers extra SQL operations.
• The simple iteration avoids all that: one pass, minimal reads, and deduplication handled in memory.

🧠 Conclusion

The myth of “skip logic” being faster doesn’t hold up under scrutiny. In fact, it’s significantly slower and heavier on SQL. If you’re extracting grouped identifiers from line-level records, favor a clean iteration with a uniqueness check.

Sometimes, the best optimization is the one that looks boring — and works brilliantly.

And of course, a perfect solution would be to use a Query instead (306ms), but queries are not decades old 😉.

query 57000 "Collect Sales Header"
{
    Caption = 'Collect Sales Header';
    QueryType = Normal;

    elements
    {
        dataitem(SalesLine; "Sales Line")
        {
            column(DocumentType; "Document Type") { }
            column(DocumentNo; "Document No.") { }
            // An aggregate column makes the query group its output by the
            // non-aggregated columns, here Document Type and Document No.
            column(DocumentCount)
            {
                Method = Count;
            }
        }
    }
}

local procedure GetSalesHeaderFromLinesQuery()
var
    SalesHeaderQuery: Query "Collect Sales Header";
    SalesOrders: List of [Code[20]];
begin
    SalesHeaderQuery.SetRange(DocumentType, "Sales Document Type"::Order);
    if not SalesHeaderQuery.Open() then
        exit;

    // Each row returned by Read() is already grouped by document number,
    // so no in-memory deduplication is needed.
    while SalesHeaderQuery.Read() do
        SalesOrders.Add(SalesHeaderQuery.DocumentNo);

    SalesHeaderQuery.Close();
end;
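
For completeness: none of the variants above actually reads the "Sales Header" records; they only collect the document numbers. A hypothetical follow-up procedure (ProcessCollectedOrders is not part of the examples above) could consume the list like this:

local procedure ProcessCollectedOrders(SalesOrders: List of [Code[20]])
var
    SalesHeader: Record "Sales Header";
    DocumentNo: Code[20];
begin
    foreach DocumentNo in SalesOrders do
        if SalesHeader.Get(SalesHeader."Document Type"::Order, DocumentNo) then
            SalesHeader.Mark(true); // placeholder for whatever processing the headers need
end;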