Somewhere in your organisation, there is a system that has been running for 20 or 30 years. It still works. It still processes money, policies, accounts, or transactions every single day. It probably talks to other systems using queues, fixed-length messages, and COBOL copybooks. And chances are, it is integrated through middleware like IBM ACE.
The pressures around such systems are familiar:
- The business wants speed and faster change.
- Developers want maintainable and testable code.
- Operations want stability and zero data loss.
Modernising a system like this is not about rewriting everything or replacing it with the latest framework. It is about respecting what already works, while building safer and more maintainable integration layers around it.
What Is Integration and Why Do We Need It?
To understand integration, it helps to think about communication between people who speak different languages.
Imagine a situation where one group of people speaks isiXhosa and another group speaks English. Both languages are valid. Both are rich, expressive, and capable of conveying complex ideas. The problem is not that one language is better than the other; the problem is that the two groups cannot understand each other directly.
If these two groups need to exchange important information, such as instructions, agreements, or historical knowledge, forcing one group to abandon its language and immediately adopt the other would be unrealistic and risky. Learning a new language takes time, and during that learning process, meaning can easily be lost or misunderstood. In situations where accuracy matters, misunderstandings can have serious consequences.
The practical solution is a translator: someone who understands both isiXhosa and English and can translate messages accurately from one language to the other without changing their meaning. The translator does not invent new information, remove details, or “simplify” the message in a way that alters intent. Their role is to preserve meaning, tone, and accuracy so that both sides walk away with the same understanding.
Software integration plays the same role between systems.
In an enterprise environment, legacy systems often “speak” one language, for example fixed-length messages defined by COBOL copybooks and delivered through message queues. Modern systems usually “speak” a different language, such as JSON over HTTP using REST APIs. Both systems exist because they solve real business problems, and both are often critical to daily operations.
Integration is the layer that understands both “languages”. It translates data formats, maps fields correctly, enforces rules, routes messages, and handles errors. Most importantly, it ensures that information sent by one system is received and interpreted correctly by the other, without losing data or changing meaning during the transformation.
We need integration because systems do not evolve at the same pace. Businesses cannot pause operations while every legacy system is rewritten. Integration allows organisations to modernise gradually, introducing new services while continuing to rely on trusted older systems. It enables cooperation instead of conflict between old and new technology.
In simple terms, integration exists to make sure systems understand each other, just as a translator ensures that isiXhosa and English speakers can communicate clearly and accurately.
What Is a Queue and Why Legacy Systems Use It
A queue is a messaging mechanism that allows systems to communicate asynchronously. Instead of one system calling another directly and waiting for a response, it sends a message to a queue. The receiving system reads the message when it is ready.
Queues provide several critical guarantees that legacy environments rely on:
- Reliability: messages are not lost if a system goes down
- Decoupling: sender and receiver do not need to be online at the same time
- Back-pressure handling: systems can process at their own pace
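These guarantees can be illustrated in miniature with a plain in-memory queue from the JDK. This is only a sketch of the decoupling idea; a real broker such as IBM MQ adds persistence, transactions, and delivery guarantees that `ArrayBlockingQueue` does not have.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for a message queue: messages wait here until consumed
        BlockingQueue<String> paymentQueue = new ArrayBlockingQueue<>(10);

        // Producer: hands the message to the queue and moves on;
        // it does not wait for, or even know about, the consumer
        paymentQueue.put("PAY0000001|ACC1234567");

        // Consumer: reads when it is ready, at its own pace
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("Processed: " + paymentQueue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        consumer.join();
    }
}
```

The producer returns as soon as `put` completes; whether the consumer runs a millisecond or an hour later does not change the outcome.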
IBM MQ is a common implementation of this pattern. Messages placed on a queue remain there until they are successfully consumed. This makes queues ideal for financial and transactional systems where data loss is unacceptable.
Once you understand queues, the next question becomes: what sits between these systems to read messages, transform them, and send them onward?
Why Middleware Like IBM ACE Exists
As organisations grew, they needed a central place to manage integrations. This is where middleware such as IBM App Connect Enterprise (IBM ACE) comes in. IBM ACE was designed to sit between systems and act as an integration hub. It reads messages from queues, understands legacy formats, applies transformation rules, and routes messages to the correct destination. Its strength is not business logic; its strength is discipline and control. IBM ACE knows how to handle fixed-length messages, copybooks, retries, error handling, and guaranteed delivery. For many years, it has been the safest way to integrate critical systems.
However, as development practices shifted toward microservices and code-based delivery, teams began looking for lighter and more flexible alternatives that still respected the same integration principles while remaining safe for mission-critical workloads.
In legacy environments, IBM ACE typically:
- Reads messages from IBM MQ queues
- Parses copybook-based fixed-length messages
- Applies routing and transformation logic
- Sends messages to downstream systems
IBM ACE has been extremely successful because it understands legacy formats deeply and handles reliability concerns well. The challenge is that development and maintenance can be slow, and teams often want lighter, more modern integration stacks. This is where modern, code-centric integration stacks such as Spring Boot and Apache Camel come into the picture.
Read more about IBM ACE here: IBMDOCS
Read more about Spring Boot here: 100 Days of Spring Boot - A Complete Guide For Beginners
Why Apache Camel Is a Good Modern Fit
Apache Camel exists because integration is still necessary, but the way we build software has changed.
Apache Camel is an integration framework that provides the same core integration patterns as enterprise middleware, but in a lightweight, developer-friendly way. Instead of building integration logic visually or in proprietary tooling, Camel lets developers define message flows using readable, declarative routes. Camel focuses on orchestration, not business logic.
Camel focuses on:
- Routing messages between systems
- Transforming data formats
- Integrating with queues, files, databases, and HTTP APIs
- Keeping integration logic readable and testable
Camel does not replace business logic. Instead, it orchestrates how messages flow through the system. This makes it a strong choice when modernising legacy integrations without breaking existing contracts. But to connect them safely, Camel must understand legacy message formats. This brings us to the most critical concept of all: the copybook.
Read more about Apache Camel in the official documentation: CamelDocs
Why You Must Understand Copybooks First
Before touching Spring Boot or Apache Camel, you must understand what you are dealing with.
A copybook is not “just a data definition”. It is a contract that multiple systems depend on and trust.
In COBOL-based and other legacy environments, a copybook defines the exact structure of a fixed-length message or record. It specifies where each field starts, how long it is, how many digits it contains, whether it is numeric or text, and how decimals or signs are represented. The message itself is just one long string of characters or bytes. There are no field names, no separators, and no self-describing metadata.
Every system that reads or writes that message relies on the copybook to interpret it correctly. Because everything is position-based, shifting even a single character breaks the entire message. That strictness is not a weakness; it is how data integrity is enforced in legacy systems.
Example Copybook
```cobol
01  PAYMENT-REQUEST.
    05  REQUEST-ID      PIC X(10).
    05  ACCOUNT-NUMBER  PIC 9(10).
    05  ITEM-COUNT      PIC 9(2).
    05  ITEMS OCCURS 3 TIMES.
        10  ITEM-CODE   PIC X(5).
        10  ITEM-AMOUNT PIC 9(7)V99 COMP-3.
```
What this defines
- Field order (sequence is critical)
- Field size (number of bytes)
- Data type
- Decimal precision
- Repeating structures (OCCURS)
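The sizes above can be added up in plain Java. One assumption worth flagging: a COMP-3 (packed decimal) field stores two digits per byte plus a sign nibble, so PIC 9(7)V99 COMP-3 (9 digits in total) occupies 5 bytes, not 9. The arithmetic below is a sketch of that layout calculation for the example copybook:

```java
public class CopybookLayout {
    public static void main(String[] args) {
        int requestId = 10;     // PIC X(10): 10 bytes of text
        int accountNumber = 10; // PIC 9(10): 10 bytes of display numeric
        int itemCount = 2;      // PIC 9(2): 2 bytes

        int itemCode = 5;       // PIC X(5): 5 bytes
        // COMP-3: two digit nibbles per byte plus one sign nibble,
        // so 9 digits -> 10 nibbles -> 5 bytes
        int amountDigits = 7 + 2; // 9(7)V99
        int itemAmount = (amountDigits + 1 + 1) / 2; // ceil((digits + 1) / 2)

        int item = itemCode + itemAmount;
        int record = requestId + accountNumber + itemCount + 3 * item;
        System.out.println("item = " + item + " bytes, record = " + record + " bytes");
    }
}
```

Knowing the exact record length up front is what lets every consumer slice the byte stream at the same offsets.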
When a message is sent:
- There are no field names
- There is no delimiter
- Everything is positional

If one byte is misinterpreted, the entire message becomes invalid.
This is why data integrity must never be taken for granted during migration.
A message defined by this copybook will look exactly the same every time. Same order. Same length. Same meaning.
There are no field names in the message. There is no JSON. There is no flexibility. If the copybook says the first 10 characters are REQUEST-ID, then the first 10 characters must be REQUEST-ID. Shift even one character, and everything after it is wrong.
Consider this example record:
i am a copy book message with length of 42
The string above is exactly 42 characters, including spaces. If the copybook says that the word “copy” starts at position 8 and ends at position 11, then you must read exactly those positions. Counting incorrectly, shifting a field, or removing a single space corrupts the entire structure. That is how strict copybooks are.
Copybook systems are often described as “fragile”, but that description is misleading. They are not fragile; they are precise and unforgiving by design.
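The same positional discipline is easy to demonstrate in plain Java. The record below is a hypothetical header built from the example copybook (REQUEST-ID, ACCOUNT-NUMBER, ITEM-COUNT); the values are made up, and the one real rule on display is that copybook positions are 1-based while Java's `substring` is 0-based:

```java
public class FixedPositionDemo {
    public static void main(String[] args) {
        // Hypothetical header: REQUEST-ID (10) + ACCOUNT-NUMBER (10) + ITEM-COUNT (2)
        String record = "REQ0000001" + "1234567890" + "02";

        // Copybook positions are 1-based; substring indexes are 0-based
        String requestId = record.substring(0, 10);
        String accountNumber = record.substring(10, 20);
        String itemCount = record.substring(20, 22);
        System.out.println(requestId + " / " + accountNumber + " / " + itemCount);

        // Shift the read window by one byte and every later field is corrupted
        System.out.println("shifted: " + record.substring(11, 21));
    }
}
```

The shifted read still returns ten characters; nothing throws an exception. Positional corruption is silent, which is exactly why it is dangerous.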
Why Data Integrity Is Non-Negotiable
In legacy environments:
- A wrong decimal can move money incorrectly
- A wrong sign nibble can flip debit/credit
- A shifted field can corrupt every downstream system
During migration:
- You must preserve semantic equivalence
- You must preserve numeric precision
- You must preserve field alignment
- You must preserve business meaning
- A system that “mostly works” is not acceptable
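The “sign nibble” risk is specific to COMP-3 (packed decimal) fields, where each byte carries two digit nibbles and the final nibble encodes the sign (conventionally 0xC positive, 0xD negative, 0xF unsigned). The helper below is a minimal sketch of unpacking such a field; production code would also need to validate digit nibbles and handle overflow:

```java
import java.math.BigDecimal;

public class Comp3 {
    // Unpack a COMP-3 field into a BigDecimal with the given scale.
    // 0xD in the sign nibble means negative; 0xC and 0xF mean positive/unsigned.
    static BigDecimal unpack(byte[] data, int scale) {
        if (data.length == 0) throw new IllegalArgumentException("empty COMP-3 field");
        StringBuilder digits = new StringBuilder();
        for (int i = 0; i < data.length - 1; i++) {
            digits.append((data[i] >> 4) & 0x0F).append(data[i] & 0x0F);
        }
        int last = data[data.length - 1];
        digits.append((last >> 4) & 0x0F);        // final digit nibble
        boolean negative = (last & 0x0F) == 0x0D; // sign nibble
        BigDecimal value = new BigDecimal(digits.toString()).movePointLeft(scale);
        return negative ? value.negate() : value;
    }

    public static void main(String[] args) {
        // 0x12 0x34 0x5C packs the digits 12345 with a positive sign;
        // at scale 2 that reads as 123.45 (think PIC 9(3)V99 COMP-3)
        byte[] packed = {0x12, 0x34, 0x5C};
        System.out.println(Comp3.unpack(packed, 2));
    }
}
```

Misread that final nibble as a digit and the value gains a spurious trailing digit; misread 0xD as 0xC and a debit becomes a credit.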
Why We Need Bindy
When working with microservices, we want to deal with structured objects, not raw strings. But legacy systems do not send JSON or XML. They send fixed-length records.
Bindy exists to bridge that gap.
Bindy is a data-binding component provided by Apache Camel. Its purpose is to convert flat, position-based messages into Java objects and back again, without breaking the legacy contract.
What Bindy Fixed-Length Actually Means
Bindy supports different formats. For copybooks, we use Bindy fixed-length.
In fixed-length mode:
- Each field has a defined length
- Fields are read sequentially
- There are no delimiters
- Order matters more than anything
The pos attribute in Bindy is not a byte offset. It is a sequence number. Bindy reads fields in order and uses the length to know how many characters to consume.
This approach mirrors how copybooks work and prevents common mistakes.
How we model this with Bindy (important: pos is incremental)
Bindy's pos is a sequential field index (1, 2, 3, 4). Do not treat pos as a byte offset. For the OCCURS block, we model it as a single @DataField and attach a converter that returns a List<PaymentItem>.
```java
// PaymentRecord.java
import org.apache.camel.dataformat.bindy.annotation.BindyConverter;
import org.apache.camel.dataformat.bindy.annotation.DataField;
import org.apache.camel.dataformat.bindy.annotation.FixedLengthRecord;
import java.util.List;

@FixedLengthRecord
public class PaymentRecord {

    @DataField(pos = 1, length = 10)
    private String requestId;

    @DataField(pos = 2, length = 10)
    private String accountNumber;

    @DataField(pos = 3, length = 2)
    private String itemCountRaw; // keep as String for initial validation

    // OCCURS block as one field: 3 * (5 + 9) = 42 chars in this text-numeric example
    @DataField(pos = 4, length = 3 * 14) // adjust to your actual field sizes
    @BindyConverter(ItemOccursTextConverter.class)
    private List<PaymentItem> items;

    // getters / setters omitted for brevity (or use Lombok)
}
```
PaymentItem is a small POJO holding one repeating item (the constructor is used by the converter below):

```java
// PaymentItem.java
import java.math.BigDecimal;

public class PaymentItem {
    private String itemCode;
    private BigDecimal amount; // scale = 2

    public PaymentItem(String itemCode, BigDecimal amount) {
        this.itemCode = itemCode;
        this.amount = amount;
    }

    public String getItemCode() { return itemCode; }
    public BigDecimal getAmount() { return amount; }
}
```
Why OCCURS Requires Special Handling
An OCCURS block represents repeating data. In fixed-length messages, this repetition is not dynamic. The space is always reserved, even if fewer items are present.
Bindy does not automatically understand repeating groups. For this reason, the entire OCCURS block is treated as a single field and parsing is delegated to a custom converter.
The converter receives the raw substring and is responsible for slicing it into fixed-size segments and converting each segment into a domain object.
This keeps the route clean and keeps copybook logic in one place.
The Bindy converter for OCCURS (text-numeric variant)
This converter receives the substring representing the entire OCCURS block, splits it into item slices, and returns a List<PaymentItem>.
```java
// ItemOccursTextConverter.java
import org.apache.camel.dataformat.bindy.Format;
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

public class ItemOccursTextConverter implements Format<List<PaymentItem>> {

    private static final int OCCURS = 3;
    private static final int ITEM_CODE_LEN = 5;
    private static final int ITEM_AMOUNT_LEN = 9; // unscaled digits (e.g. "000001234")
    private static final int ITEM_LEN = ITEM_CODE_LEN + ITEM_AMOUNT_LEN;
    private static final int SCALE = 2;

    @Override
    public List<PaymentItem> parse(String text) throws Exception {
        List<PaymentItem> items = new ArrayList<>();
        if (text == null) text = "";
        // Right-pad to the full OCCURS length so every slot can be sliced safely
        text = String.format("%-" + (OCCURS * ITEM_LEN) + "s", text);
        for (int i = 0; i < OCCURS; i++) {
            String block = text.substring(i * ITEM_LEN, (i + 1) * ITEM_LEN);
            String code = block.substring(0, ITEM_CODE_LEN).trim();
            String amountRaw = block.substring(ITEM_CODE_LEN).trim();
            boolean emptyAmount = amountRaw.isEmpty() || Long.parseLong(amountRaw) == 0;
            if (code.isEmpty() && emptyAmount) {
                continue; // unused OCCURS slot
            }
            BigDecimal amount = new BigDecimal(amountRaw).movePointLeft(SCALE);
            items.add(new PaymentItem(code, amount));
        }
        return items;
    }

    @Override
    public String format(List<PaymentItem> items) throws Exception {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < OCCURS; i++) {
            if (i < items.size()) {
                PaymentItem it = items.get(i);
                sb.append(String.format("%-" + ITEM_CODE_LEN + "s", it.getItemCode()));
                BigDecimal unscaled = it.getAmount().movePointRight(SCALE);
                sb.append(String.format("%0" + ITEM_AMOUNT_LEN + "d", unscaled.longValueExact()));
            } else {
                // Pad unused slots: spaces for the code, zeros for the amount
                sb.append(String.format("%-" + ITEM_CODE_LEN + "s", ""));
                sb.append(String.format("%0" + ITEM_AMOUNT_LEN + "d", 0));
            }
        }
        return sb.toString();
    }
    // Example structure only: real copybooks need further validation and error handling
}
```
Why this design? You keep parsing logic local to the converter (where Bindy expects it) and preserve string/raw behavior until later verification.
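A round-trip check is the cheapest way to gain confidence in a converter like this: format(parse(x)) should reproduce x byte for byte. The sketch below is self-contained, so it inlines the same slicing rules (a 5-character code plus a 9-digit text amount, three slots) instead of importing the converter class; the names and sample data are illustrative assumptions.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

public class OccursRoundTripCheck {
    static final int OCCURS = 3, CODE_LEN = 5, AMT_LEN = 9, ITEM_LEN = CODE_LEN + AMT_LEN;

    // Parse: slice the OCCURS block into fixed-size code/amount pairs
    static List<String[]> parse(String block) {
        List<String[]> items = new ArrayList<>();
        for (int i = 0; i < OCCURS; i++) {
            String slice = block.substring(i * ITEM_LEN, (i + 1) * ITEM_LEN);
            String code = slice.substring(0, CODE_LEN).trim();
            if (code.isEmpty()) continue; // unused slot
            BigDecimal amount = new BigDecimal(slice.substring(CODE_LEN)).movePointLeft(2);
            items.add(new String[] { code, amount.toPlainString() });
        }
        return items;
    }

    // Format: write items back, padding unused slots to keep the length fixed
    static String format(List<String[]> items) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < OCCURS; i++) {
            if (i < items.size()) {
                long unscaled = new BigDecimal(items.get(i)[1]).movePointRight(2).longValueExact();
                sb.append(String.format("%-" + CODE_LEN + "s", items.get(i)[0]));
                sb.append(String.format("%0" + AMT_LEN + "d", unscaled));
            } else {
                sb.append(String.format("%-" + CODE_LEN + "s", ""));
                sb.append(String.format("%0" + AMT_LEN + "d", 0));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Two used slots and one empty slot, 14 characters each
        String block = "AB001000012345" + "CD002000000100" + "     000000000";
        System.out.println(block.equals(format(parse(block))));
    }
}
```

The same idea transfers directly to a JUnit test against the real converter: parse a known block, format the result, and assert byte-for-byte equality.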
Why Camel Routes Must Stay Thin
At this stage, we have:
- Queues for reliable delivery
- Camel for orchestration
- Bindy for fixed-length parsing
- Converters for copybook precision
Now we must decide where logic lives.
Camel routes should not contain business rules. Their job is to move data safely and predictably.
A good route:
- Preserves the raw message
- Unmarshals with Bindy
- Performs structural validation
- Delegates business logic to beans
- Calls downstream APIs
- Marshals the response back
- Handles failures consistently
Business logic belongs in services that can be tested independently.
The Camel route: thin, readable, and production-minded
The route below shows the responsibilities: preserve raw message, unmarshal with Bindy, quick structural validation, hand off to a bean for business logic, call an API, validate response, map back, marshal, and send to output queue. Errors go to a DLQ with retries.
```java
// PaymentRoute.java (Spring @Component)
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.dataformat.bindy.fixed.BindyFixedLengthDataFormat;
import org.apache.camel.model.dataformat.JsonLibrary;
import org.springframework.stereotype.Component;

@Component
public class PaymentRoute extends RouteBuilder {

    @Override
    public void configure() {

        onException(Exception.class)
            .maximumRedeliveries(3)
            .redeliveryDelay(2000)
            .handled(true)
            .process(exchange -> {
                Exception ex = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
                String original = exchange.getProperty("rawMessage", String.class);
                String errJson = "{\"error\":\"" + ex.getMessage() + "\",\"raw\":\"" + original + "\"}";
                exchange.getIn().setBody(errJson);
            })
            .to("log:errors?level=ERROR")
            .to("jms:queue:PAYMENT.DLQ");

        BindyFixedLengthDataFormat bindy = new BindyFixedLengthDataFormat(PaymentRecord.class);

        from("jms:queue:PAYMENT.IN")
            .routeId("payment-legacy-adapter")
            .setProperty("rawMessage", body())
            .unmarshal(bindy) // -> PaymentRecord (strings + items list)
            .process(exchange -> {
                PaymentRecord rec = exchange.getIn().getBody(PaymentRecord.class);
                int declared = Integer.parseInt(rec.getItemCountRaw().trim());
                int parsed = rec.getItems() == null ? 0 : rec.getItems().size();
                if (declared != parsed) throw new IllegalArgumentException("itemCount mismatch");
            })
            .bean("requestProcessor", "process") // map & business logic -> PaymentRequest
            .to("http://payments.internal/api/payments")
            .process(exchange -> {
                Integer code = exchange.getIn().getHeader(Exchange.HTTP_RESPONSE_CODE, Integer.class);
                if (code == null || code < 200 || code >= 300) {
                    throw new RuntimeException("Payment API returned status " + code);
                }
            })
            .unmarshal().json(JsonLibrary.Jackson, ApiResponse.class)
            .bean("responseProcessor", "process") // map API response -> PaymentResponse
            .marshal().bindy(org.apache.camel.model.dataformat.BindyType.Fixed, PaymentResponse.class)
            .to("jms:queue:PAYMENT.OUT");
    }
}
```
Example beans: request and response processors
These are deliberately simple templates; place your conversions and validations here.
```java
// RequestProcessor.java
import org.springframework.stereotype.Component;

@Component("requestProcessor")
public class RequestProcessor {

    public PaymentRequest process(PaymentRecord rec) {
        PaymentRequest req = new PaymentRequest();
        req.setRequestId(rec.getRequestId().trim());
        req.setAccountNumber(rec.getAccountNumber().trim());
        req.setItems(rec.getItems()); // items are already PaymentItem objects
        // business validation goes here
        return req;
    }
}
```
```java
// ResponseProcessor.java
import org.springframework.stereotype.Component;

@Component("responseProcessor")
public class ResponseProcessor {

    public PaymentResponse process(ApiResponse apiResp) {
        PaymentResponse resp = new PaymentResponse();
        // map fields from apiResp to the legacy response POJO
        return resp;
    }
}
```

Design these as pure, testable methods.
Why Dead-Letter Queues Are Essential
- No system is perfect. Messages will fail.
- When they do, we must not lose them.
- A dead-letter queue exists to capture failed messages along with enough context to understand what went wrong. This allows operators to investigate, fix issues, and replay messages safely.
- In enterprise systems, the ability to explain what happened is just as important as processing messages successfully.
Final Conclusion
If you follow the flow from queues, to middleware, to Camel, to copybooks, to Bindy, the picture becomes clear.
Modernising legacy systems is not about replacing everything. It is about building a disciplined integration layer that respects existing contracts while enabling microservices to evolve.
When you understand why each tool exists, what problem it solves, and how it connects to the next, you can build integrations with confidence.
That confidence is what allows legacy systems and modern microservices to coexist safely in production.
Final words: what you now have and next steps
You now have:
- A concrete Bindy model that treats OCCURS as a single field
- A @BindyConverter pattern to parse repeating groups
- A COMP-3 unpack helper for packed decimals (and a reminder to test carefully)
- A thin Camel route that preserves raw messages, validates structure, delegates business logic to beans, talks to external APIs, maps responses back, and routes failures to a DLQ
- A unit-test example for converter verification
— Anonymous