Contributors mailing list archives
contributors@odoo-community.org
Large Data Files
by "Jerôme Dewandre" <jerome.dewandre.mail@gmail.com> - 20/08/2024 17:26:36Hello,
I am currently working on a sync with a legacy system (adesoft) that contains a large amount of data (such as meetings) which must be synchronized on a daily basis.
Everything starts getting slow when I import 30,000 records with the conventional create() method.
I suppose the ORM might be the bottleneck here. Potential workarounds:
1. Bypass the ORM and create the records with self.env.cr.execute (see the sketch below this list; but if I want to delete them I will also need a custom query)
2. Bypass the ORM with stored procedures (https://www.postgresql.org/docs/current/sql-createprocedure.html)
3. Increase the CPU/RAM/Worker nodes
4. Some better ideas?
What would be the best way to go?
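For option 1, I have something like the sketch below in mind (insert_events_sql is just a placeholder name). It assumes the target table is event_event and that location, name, date_begin and date_end are plain columns; translatable fields and the create_uid/create_date/write_uid/write_date audit columns would need extra handling, and the ORM cache has to be cleared afterwards since the rows are written behind its back:

    def insert_events_sql(self, df):
        # Hypothetical helper sketching option 1: bulk INSERT through the cursor.
        rows = list(df[['location', 'name', 'date_begin', 'date_end']]
                    .itertuples(index=False, name=None))
        if not rows:
            return
        # One INSERT ... VALUES (...), (...), ... statement with bound parameters.
        placeholders = ", ".join(["(%s, %s, %s, %s)"] * len(rows))
        params = [value for row in rows for value in row]
        self.env.cr.execute(
            "INSERT INTO event_event (location, name, date_begin, date_end) "
            "VALUES " + placeholders,
            params,
        )
        # The rows were written behind the ORM's back, so drop the cached data
        # before reading event.event records again (invalidate_model() on
        # Odoo 16+, invalidate_cache() on older versions).
        self.env['event.event'].invalidate_model()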
A piece of my current test (df is a pandas dataframe containing the new events):
@api.model
def create_events_from_df(self, df):
    Event = self.env['event.event']
    events_data = []
    for _, row in df.iterrows():
        event_data = {
            'location': row['location'],
            'name': row['name'],
            'date_begin': row['date_begin'],
            'date_end': row['date_end'],
        }
        events_data.append(event_data)
    # Create all events in a single batch
    Event.create(events_data)
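A variant I am also considering, which skips the row-by-row iterrows() loop and feeds create() in fixed-size chunks (the 2,000-row chunk size is just a guess on my side, not an Odoo requirement):

    @api.model
    def create_events_from_df_chunked(self, df, chunk_size=2000):
        Event = self.env['event.event']
        # Build the list of value dicts in one pandas call instead of
        # iterating row by row with iterrows().
        events_data = df[['location', 'name', 'date_begin', 'date_end']].to_dict('records')
        # create() accepts a list of dicts, so feed it in manageable batches.
        for start in range(0, len(events_data), chunk_size):
            Event.create(events_data[start:start + chunk_size])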
Thanks in advance if you read this, and thanks again if you reply :)
Jérôme
Follow-Ups
Re: Large Data Files
by "Jerôme Dewandre" <jerome.dewandre.mail@gmail.com> - 20/08/2024 23:51:41 - 0 -
Re: Large Data Files
by Open Source Integrators Europe, LDA, Daniel Reis