Summary

Tables are exhibiting slow performance on modest datasets (1000+ rows). This becomes particularly apparent with telemetry that has large data objects, such as Yamcs "Events" data.

The issue appears to be in the check for duplicate values.
The code first creates a shortlist of candidate duplicates based on timestamp alone. It then does a more exhaustive search using `_.isEqual` on this candidate list to find genuine duplicates. From performance profiling I am seeing this function being called with extremely high frequency (see screenshot below), which is unexpected.
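The two-phase check described above can be sketched roughly as follows. This is an illustrative, dependency-free reconstruction, not the actual Open MCT implementation: `isDuplicate`, `rows`, and the JSON-based `deepEqual` stand-in (the real code uses lodash's `_.isEqual`) are all assumptions.

```javascript
// Stand-in for lodash's _.isEqual, so this sketch is dependency-free.
// Note: JSON comparison is order-sensitive and skips undefined values,
// so it is only an approximation of true deep equality.
function deepEqual(a, b) {
    return JSON.stringify(a) === JSON.stringify(b);
}

// `rows` is assumed to be sorted ascending by `timestamp`.
function isDuplicate(rows, incoming) {
    // Phase 1: binary-search for the first row sharing the incoming
    // timestamp, so the shortlist never scans the whole telemetry set.
    let lo = 0;
    let hi = rows.length;
    while (lo < hi) {
        const mid = (lo + hi) >> 1;
        if (rows[mid].timestamp < incoming.timestamp) {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    // Phase 2: exhaustive deep comparison, but only over the shortlist
    // of rows that share the incoming timestamp.
    for (let i = lo; i < rows.length && rows[i].timestamp === incoming.timestamp; i++) {
        if (deepEqual(rows[i], incoming)) {
            return true;
        }
    }
    return false;
}
```

If phase 1 is correct, the cost of each check is bounded by the number of rows sharing a timestamp, not by the total row count.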
The first step is to confirm that the code building the shortlist of potential duplicates is working as expected and is not accidentally scanning the entire telemetry set, because it's surprising to see the duplicate check running so regularly. If that is not the issue, then we need to consider whether there is a more performant method of detecting duplicates than `_.isEqual`.
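One possible cheaper alternative, sketched under the assumption that each datum carries a small set of primitive fields that together identify it (timestamp plus, say, a severity and message field for Yamcs events). Comparing only those fields avoids the full recursive `_.isEqual` walk over large data objects; the field names here are hypothetical.

```javascript
// Hypothetical identity fields; the real set would depend on the
// telemetry metadata for the domain object being displayed.
const IDENTITY_KEYS = ['timestamp', 'severity', 'message'];

// Strict comparison of a handful of primitive fields is O(k) in the
// number of keys, independent of how large the rest of the datum is.
function sameIdentity(a, b) {
    return IDENTITY_KEYS.every((key) => a[key] === b[key]);
}
```

The trade-off is that two distinct data with identical identity fields would be wrongly treated as duplicates, so this only works if the chosen fields genuinely distinguish data.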
Impact Check List
- [ ] Data loss or misrepresented data?
- [ ] Regression? Did this used to work or has it always been broken?
- [ ] Is there a workaround available?
- [ ] Does this impact a critical component?
- [ ] Is this just a visual bug?