There are many technical solutions to this problem, as others have pointed out. What I would add is that data at the edge should be considered immutable.
If records are allowed to change, then you end up in situations where concurrent changes don't converge. But if you instead collect a history of unchanging events, then you can untangle those scenarios.
Event Sourcing is the most popular implementation of a history of immutable events. But I have found that a different model works better for data at the edge. An event store tends to be centralized within your architecture. That is necessary because the event store determines the one true order of events. But if you relax that constraint and allow events to be partially ordered, then you can keep a history at the edge. If you follow a few simple rules, then those histories are guaranteed to converge.
Rule number 1: A record is immutable. It cannot be modified or deleted.
Rule number 2: A record refers to its predecessors. If the order between events matters, then it is made explicit with this predecessor relationship. If there is no predecessor relationship, then the order doesn't matter. No timestamps.
Rule number 3: A record is identified only by its type, contents, and set of predecessors. If two records have the same type, contents, and predecessors, then they are the same record. No surrogate keys.
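Here is a minimal sketch of such a record in TypeScript (Node), assuming a SHA-256 hash over a canonical serialization as the identity. The names `FactRecord`, `canonical`, and `recordId` are illustrative, not from any particular library.

```typescript
import { createHash } from "node:crypto";

// Rule 1: the record is never updated or deleted.
// Rule 2: order is expressed only through explicit predecessor references.
// Rule 3: identity derives from type + contents + predecessors; no surrogate key.
interface FactRecord {
  readonly type: string;
  readonly contents: Record<string, string | number | boolean>;
  readonly predecessors: readonly string[]; // identities of predecessor records
}

// Canonical serialization keeps the hash stable regardless of property order.
function canonical(record: FactRecord): string {
  return JSON.stringify({
    type: record.type,
    contents: Object.fromEntries(
      Object.entries(record.contents).sort(([a], [b]) => a.localeCompare(b))
    ),
    predecessors: [...record.predecessors].sort(),
  });
}

// Two records with the same type, contents, and predecessors hash to the same
// identity, so they are the same record.
function recordId(record: FactRecord): string {
  return createHash("sha256").update(canonical(record)).digest("hex");
}
```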
Following these rules, analyze your problem domain and build up a model. The immutable records in that model form a directed acyclic graph, with arrows pointing toward the predecessors. Send those records to the edge nodes and let them make those millisecond decisions based only on the records that they have on hand. Record their decisions as new records in this graph, and send those records back.
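Continuing the sketch above, an edge node can hold its history as a set of records keyed by identity, make a decision from the records it has on hand, and record that decision as a new record whose predecessors point back into the graph. The kiosk scenario here is hypothetical.

```typescript
type History = Map<string, FactRecord>;

// Adding a record is idempotent: the same record always hashes to the same id.
function addRecord(history: History, record: FactRecord): string {
  const id = recordId(record);
  history.set(id, record);
  return id;
}

// An edge node (say, a kiosk) approves an order using only local records.
const kiosk: History = new Map();
const orderId = addRecord(kiosk, {
  type: "Order",
  contents: { item: "coffee", quantity: 1 },
  predecessors: [],
});
addRecord(kiosk, {
  type: "Approval",
  contents: { approvedBy: "kiosk-7" },
  predecessors: [orderId], // explicit causal link; no timestamp needed
});
console.log(kiosk.size); // 2 records in the local history
```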
Jeff Doolittle and I talk about this system on a recent episode of Software Engineering Radio: https://www.se-radio.net/2021/02/episode-447-michael-perry-o...
No matter how you store it, treat data at the edge as if you could not update or delete records. Instead, accrue new records over time. Make decisions at the edge with autonomy, knowing that they will be honored within the growing partially-ordered history.
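As a final sketch of convergence under these assumptions: merging two histories is just a set union keyed by record identity, so nodes that exchange records in any order end up with the same graph, and duplicates collapse because identity is content-derived.

```typescript
function merge(a: History, b: History): History {
  const result: History = new Map(a);
  for (const [id, record] of b) {
    result.set(id, record); // duplicates collapse: same content, same id
  }
  return result;
}

// Suppose the central node received the same Order by another path.
const central: History = new Map();
addRecord(central, {
  type: "Order",
  contents: { item: "coffee", quantity: 1 },
  predecessors: [],
});

// Merging in either direction yields the same set of records.
const merged1 = merge(central, kiosk);
const merged2 = merge(kiosk, central);
console.log(
  JSON.stringify([...merged1.keys()].sort()) ===
    JSON.stringify([...merged2.keys()].sort())
); // true
```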