#17415 closed Bug (fixed)
django.contrib.sites.management.create_default_site populates invalid data in DB
Reported by: | | Owned by: | nobody |
---|---|---|---|
Component: | contrib.sites | Version: | dev |
Severity: | Release blocker | Keywords: | site admin create db orm |
Cc: | anssi.kaariainen@… | Triage Stage: | Accepted |
Has patch: | yes | Needs documentation: | no |
Needs tests: | no | Patch needs improvement: | no |
Easy pickings: | no | UI/UX: | no |
Description (last modified by )
With a PostgreSQL backend, on trunk (1.4 pre-alpha, SVN-17189):
During syncdb, in the function django.contrib.sites.management.create_default_site,
the default Site 'example.com' is created.
But for some reason, the PostgreSQL sequence 'django_site_id_seq' is then left in an incorrect state.
The last value = 1 (this is OK),
but the flag "Will increment last value before returning next value (is_called)" = No.
This flag differs from every other sequence generated by the ORM.
Name | Start value | Last value | Increment by | Max value | Min value | Cache value | Log count | Can cycle? | is_called?
---|---|---|---|---|---|---|---|---|---
django_site_id_seq | 1 | 1 | 1 | 9223372036854775807 | 1 | 1 | 1 | No | No
What happens then?
If you go to the admin and try to add a second Site, when you save you get the exception:
Exception Type: IntegrityError duplicate key value violates unique constraint "django_site_pkey" DETAIL: Key (id)=(1) already exists.
At this point, the insert into the DB failed, but the failed attempt changed the flag on the sequence: django_site_id_seq.is_called = Yes.
This allows the next attempt to save this second Site (reload the page and resend the POST data, for example) to succeed.
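The failure mode can be sketched in plain Python. The following is a hypothetical simulation of PostgreSQL's nextval()/is_called semantics (not Django or PostgreSQL code): with is_called = No, the first nextval() returns last_value itself instead of incrementing, which collides with the already-inserted id=1.

```python
# Hypothetical simulation of a PostgreSQL sequence's is_called semantics.
class Sequence:
    def __init__(self, last_value=1, is_called=True, increment=1):
        self.last_value = last_value
        self.is_called = is_called
        self.increment = increment

    def nextval(self):
        # When is_called is False, PostgreSQL returns last_value as-is
        # and only starts incrementing on subsequent calls.
        if not self.is_called:
            self.is_called = True
            return self.last_value
        self.last_value += self.increment
        return self.last_value

# State left by create_default_site: last_value=1 but is_called=False,
# even though id=1 is already taken by the default example.com Site.
seq = Sequence(last_value=1, is_called=False)
first = seq.nextval()   # returns 1 -> duplicate key on django_site.id
second = seq.nextval()  # returns 2 -> the retry succeeds
```

This matches the observed behavior: the first admin save fails with a duplicate key on id=1, and simply retrying succeeds with id=2.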
How to reproduce the bug at this stage:
In PostgreSQL, delete the table django_site, then run manage.py syncdb. This creates a new table, repopulates the example.com default Site, and leaves the buggy is_called flag on the django_site_id_seq sequence.
I don't know why the function django.contrib.sites.management.create_default_site sets this flag to False.
I haven't tested on MySQL, and I don't know how auto-increment works there.
Attachments (2)
Change History (15)
comment:1 by , 13 years ago
Description: | modified (diff) |
---|---|
Type: | Uncategorized → Bug |
comment:2 by , 13 years ago
Triage Stage: | Unreviewed → Accepted |
---|
comment:3 by , 13 years ago
The query sent to PostgreSQL is:
sql: 'INSERT INTO "django_site" ("domain", "name") VALUES (E\'foo.com\', E\'Foo\') RETURNING "django_site"."id"'
params: (u'foo.com', u'Foo')
comment:4 by , 13 years ago
This is reproducible with any model: MyModel(pk=1, **kwargs).save()
followed by MyModel(**kwargs).save()
will produce this error.
by , 13 years ago
Attachment: | 17415-testcase.diff added |
---|
comment:5 by , 13 years ago
Component: | contrib.sites → Database layer (models, ORM) |
---|---|
Severity: | Release blocker → Normal |
Summary: | django.contrib.sites.management.create_default_site populates invalid data in DB → Creating an object with an explicit primary key value causes the next creation to fail under PostgreSQL |
And here's the same sequence in a PostgreSQL shell:
django=# INSERT INTO "testapp_dummy" ("id") VALUES (1);
INSERT 0 1
django=# INSERT INTO "testapp_dummy" ("id") VALUES (DEFAULT) RETURNING "testapp_dummy"."id";
ERROR:  duplicate key value violates unique constraint "tz_app_dummy_pkey"
DETAIL:  Key (id)=(1) already exists.
django=# INSERT INTO "tz_app_dummy" ("id") VALUES (DEFAULT) RETURNING "tz_app_dummy"."id";
 id
----
  2
(1 row)

INSERT 0 1
To sum up:
- under PostgreSQL, if you create a new object where you force the primary key, and then create another object where you leave the auto-generated primary key, the second operation will fail; this is the root cause of the bug;
- this problem has probably existed for a long time, but is more likely to bite users on Site objects since #16353;
- even though it's annoying, I wouldn't qualify this as a release blocker, as the workaround is trivial and intuitive: just retry.
I'm changing the title and setting flags to reflect this.
comment:6 by , 13 years ago
I just saw this ticket go past as part of the 1.4 rundown, and it occurred to me that the fix is obvious and has precedent in Django's source tree. Once you've manually created *any* object with an explicit primary key, you need to run the SQL generated by the sqlsequencereset management command. This finds the largest PK value and sets the sequence so it yields the next available PK value. This is an issue on PostgreSQL and Oracle, because they both use sequences for autogenerated PKs; MySQL and SQLite both use "largest PK + 1" internal mechanisms, IIRC.
We already do this as part of fixture loading. Loading a fixture involves manually specifying a whole lot of primary key values; one of the last things the loaddata command does before committing is to reset the sequences. This ensures that the next object with an autogenerated PK value will have a valid PK value.
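The reset described above boils down to a setval() call derived from the table's largest existing primary key. Here is a minimal sketch of the idea as a hypothetical helper (this is not Django's actual sqlsequencereset implementation; the function name and the `table_pkcolumn_seq` naming convention are assumptions, though the latter matches PostgreSQL's default for serial columns):

```python
def sequence_reset_sql(table, pk_column="id"):
    """Build a PostgreSQL statement that advances the table's id sequence
    to the largest existing primary key, so the next nextval() returns
    max(pk) + 1. Hypothetical helper for illustration only."""
    sequence = "%s_%s_seq" % (table, pk_column)
    return (
        "SELECT setval('%s', COALESCE(MAX(%s), 1)) FROM %s;"
        % (sequence, pk_column, table)
    )

stmt = sequence_reset_sql("django_site")
# -> "SELECT setval('django_site_id_seq', COALESCE(MAX(id), 1)) FROM django_site;"
```

Running such a statement after inserting explicit PKs (as loaddata does) leaves the sequence consistent with the table's contents.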
From my reading, this bug would have existed for as long as the sites framework has existed; although it would have been masked on any site that loaded fixtures (since loading a single fixture would have reset the sequence as a side effect).
comment:7 by , 13 years ago
Severity: | Normal → Release blocker |
---|
Prior to r16868, create_default_site did not specify the pk when adding the default site object, so I think this sequence-reset behavior is new with that changeset. Moving back to release blocker since it is a regression (which doesn't necessarily imply it needs to be fixed before alpha, but I do think we should fix it before 1.4 final).
comment:8 by , 13 years ago
Component: | Database layer (models, ORM) → contrib.sites |
---|
Also switching back to sites component; I believe fixing the general issue at the ORM level (detecting specified PKs and resetting sequences) is out of scope for Django, but in this case where the sites code is specifying a PK we should ensure the sequence value is correct afterwards.
comment:9 by , 13 years ago
There are some concurrency problems with setting the sequence to "largest PK + 1". The problem is that you will not see objects created in other concurrent transactions, so the largest visible PK is not guaranteed to actually be the largest PK. This could cause subtle bugs where everything seems to work, except in rare concurrent cases where it doesn't. This would be very hard to debug. It is worth considering making it an error to create an object with an autofield PK explicitly set, except when force_insert is given or in raw mode (that is, during fixture loading). We could then document that if you use force_insert for a model with a set autofield PK, you are responsible for resetting the sequence (or do that automatically in the force_insert case?).
The underlying problem is that on SQL level, you should not insert your own values to a column which gets its value from a sequence. It is OK to do this in non-concurrent setting (as in restore from backup, load fixtures). You should not expect it to work in concurrent setting. If Django automatically resets the sequence, it makes it look like it is OK to insert values to a serial field. But it is not. Unless you lock the table for inserts. The whole idea of sequences is that they allow concurrent inserts to a table without using max(id) + 1, which just doesn't work. (SQLite has one transaction at a time, so there are no concurrency problems, MySQL probably does some tricks under the hood, so that it can get the actual largest id even in concurrent cases).
Consider this case, with two transactions T1 and T2, and id_seq starting from 10. Largest ID in the table is 10:
T1: begin;
T1: Obj.save()          # gets nextval('id_seq') => 11
T1: Obj.save()          # gets nextval('id_seq') => 12
T2: begin;
T2: Obj(pk=1).save()    # where pk=1 does not exist
T2: reset_sequence('id_seq')  # finds the largest visible id: 10, sets sequence to 11
T1: commit;
T2: commit;
Now you have: largest id in table: 12, id_seq starts from 11.
Yes, this is a rare condition. But I believe Django should not expose this possible concurrency problem. So, I suggest this is fixed by "disallow insert into autofield with user defined value, unless it is asked explicitly". If reset_sequence is going to be used, then also lock the table (reset sequence should probably do this in any case...).
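The race described above can be sketched as a plain-Python simulation (hypothetical code, not Django's; the `Seq` class and the single shared `committed_ids` set stand in for the database sequence and for row visibility between transactions):

```python
# Hypothetical simulation of the T1/T2 interleaving: T2 resets the
# sequence from the largest id *visible to it*, missing T1's
# uncommitted rows, so the sequence ends up behind the table.
class Seq:
    def __init__(self, value):
        self.value = value
    def nextval(self):
        self.value += 1
        return self.value
    def setval(self, value):
        self.value = value

committed_ids = {10}          # largest id visible to both transactions
seq = Seq(10)                 # id_seq will hand out 11 next

t1_rows = [seq.nextval(), seq.nextval()]   # T1 inserts: 11, 12 (uncommitted)
t2_rows = [1]                              # T2 inserts explicit pk=1
seq.setval(max(committed_ids))             # T2 resets from its visible max: 10

committed_ids.update(t1_rows + t2_rows)    # both transactions commit

next_id = seq.nextval()       # 11 -- already committed by T1!
```

The simulation ends with the sequence handing out an id that is already present in the table, which is exactly the duplicate-key failure mode the reset was supposed to prevent.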
comment:10 by , 13 years ago
Cc: | added |
---|---|
Has patch: | set |
Having re-read the posts above, it seems I misunderstood what was proposed. I somehow managed to read Russell's comment suggesting that reset should be done automatically in save() if pk is set and there isn't a matching value in the DB. I now see this is not the case. Sorry for the wasted time...
Attached is a first-draft patch which solves the create_default_site() issue by resetting the sequence in the function.
by , 13 years ago
Attachment: | 17415.diff added |
---|
comment:11 by , 13 years ago
Summary: | Creating an object with an explicit primary key value causes the next creation to fail under PostgreSQL → django.contrib.sites.management.create_default_site populates invalid data in DB |
---|
Restoring the original title, per Russell's and Karen's comments.
I could reproduce this issue.
Here's what I did in a shell:
Then I logged in to the admin, tried to create a new Site object, and got this traceback: