When trying to install a module via authorize.php, the following error occurred, along with the code line and the data in question:
Notice: unserialize(): Error at offset 221 of 225 bytes in .../includes/batch.queue.inc on line 27
$item->data = unserialize($item->data);
a:2:{i:0;s:35:"update_authorize_batch_copy_project";i:1;a:4:{i:0;s:5:"devel";i:1;s:13:"ModuleUpdater";i:2;s:75:".../sites/default/private/temp/update-extraction/devel";i:3;O:15:"FileTransferSSH":5:{s:11:"
So the serialized object is truncated somehow. It turns out that serializing private and protected class members embeds NUL characters:
Note: Object's private members have the class name prepended to the member name; protected members have a '*' prepended to the member name. These prepended values have null bytes on either side. (PHP: serialize).
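The documentation note above is easy to reproduce; a minimal sketch (the class and member names here are made up for illustration):

```php
<?php
// Minimal reproduction of the NUL bytes described above. The class name
// and members are illustrative only.
class Creds {
    protected $password = 'secret'; // serialized as "\0*\0password"
    private $token = 'abc';         // serialized as "\0Creds\0token"
}

$s = serialize(new Creds());

// The serialized string contains raw NUL bytes around the visibility
// markers, so any storage layer that stops at the first NUL truncates it.
var_dump(strpos($s, "\0") !== false); // bool(true)

// A truncated serialized string fails to unserialize, which is exactly
// the "Error at offset" notice in the original report.
var_dump(@unserialize(substr($s, 0, -4))); // bool(false)
```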
And here is the relevant object and its serialization, with the strange (NUL) character replaced by %:
array(2) {
[0]=>
string(35) "update_authorize_batch_copy_project"
[1]=>
array(4) {
[0]=>
string(5) "devel"
[1]=>
string(13) "ModuleUpdater"
[2]=>
string(75) ".../sites/default/private/temp/update-extraction/devel"
[3]=>
object(FileTransferSSH)#17 (5) {
["username:protected"]=>
string(3) "ogi"
["password:protected"]=>
string(7) "password"
["hostname:protected"]=>
string(9) "localhost"
["port:protected"]=>
int(22)
["jail"]=>
string(24) "..."
}
}
}
object(InsertQuery_pgsql)#30 (9) {
["table:protected"]=>
string(5) "queue"
["delay:protected"]=>
NULL
["insertFields:protected"]=>
array(3) {
[0]=>
string(4) "name"
[1]=>
string(4) "data"
[2]=>
string(7) "created"
}
["defaultFields:protected"]=>
array(0) {
}
["insertValues:protected"]=>
array(1) {
[0]=>
array(3) {
[0]=>
string(17) "drupal_batch:13:0"
[1]=>
string(381) "a:2:{i:0;s:35:"update_authorize_batch_copy_project";i:1;a:4:{i:0;s:5:"devel";i:1;s:13:"ModuleUpdater";i:2;s:75:".../sites/default/private/temp/update-extraction/devel";i:3;O:15:"FileTransferSSH":5:{s:11:"%*%username";s:3:"ogi";s:11:"%*%password";s:7:"password";s:11:"%*%hostname";s:9:"localhost";s:7:"%*%port";i:22;s:4:"jail";s:24:"...";}}}"
[2]=>
int(1264011582)
}
}
It's PHP 5.2.6 and PostgreSQL 8.3.9 on Debian 5.0 Lenny. I guess it has something to do with PostgreSQL, and I decided it's worth raising the severity of this bug to critical.
Comment | File | Size | Author |
---|---|---|---|
#114 | 690746-fix-cache-tables-array.patch | 1.53 KB | Stevel |
#111 | 690746-follow-up.patch | 738 bytes | Damien Tournoud |
#105 | 690746.patch | 16.74 KB | Damien Tournoud |
#101 | serialize.patch | 24.68 KB | jbrown |
#97 | serialize.patch | 22.41 KB | jbrown |
Comments
Comment #1
ogi CreditAttribution: ogi commented
Confirmed that the string got truncated in the PHP PDO layer. This is the PostgreSQL log:
Comment #2
ogi CreditAttribution: ogi commented
I checked the source of both PDO pgsql and PDO mysql.
PDO pgsql calls PQescapeStringConn for escaping strings, but
so any NUL byte truncates the string.
In contrast, PDO mysql calls mysql_real_escape_string for escaping strings and it does encode the NUL character:
For completeness, I checked PDO sqlite and it's like PostgreSQL: sqlite3_snprintf is called and NUL characters are stop signs:
So where should the NUL character be escaped (PHP PDO or Drupal), and do all DBs allow storing such a character in the text data type? I don't think PHP developers will accept using logic that differs from the native DB API, so this leaves fixing it to Drupal. PostgreSQL should be able to store the NUL character, but I'm skeptical whether SQLite allows such things.
Comment #3
ogi CreditAttribution: ogi commented
Thinking a bit about the problem, a nice solution is to invent drupal_serialize and drupal_unserialize, which encode/decode the NUL character to something like '\0'. These functions make sense only when the serialized object is stored in some storage (DB or file).
Another possibility is to use BLOB instead of the text data type for cases when serialized objects are stored. I don't know if this solves the problem though.
Comment #4
ogi CreditAttribution: ogi commented
I've just changed the data field of the queue table to be bytea (PostgreSQL's blob) and it works :-) The attached patch is tested with a new Drupal+PostgreSQL installation. Do I need to add a schema update to the patch?
Comment #5
ogi CreditAttribution: ogi commentedComment #6
ogi CreditAttribution: ogi commented
Second try for the patch.
Comment #7
ogi CreditAttribution: ogi commented
Third try, against HEAD.
Comment #8
Damien Tournoud CreditAttribution: Damien Tournoud commented
This is unfortunate. If we have to do this, let's change all the TEXT columns that are used to store serialized data into BLOB, and promote that to a rule.
If we promote that to the database layer, we need to fix it for every database engine.
Comment #9
ogi CreditAttribution: ogi commented
Changing to BLOB is required only if serialized data contain objects with protected or private members. If it's just nested arrays or objects with only public members, TEXT is OK.
Comment #10
Crell CreditAttribution: Crell commented
BLOB fields make any sort of searching, indexing, or treatment of that field impossible. It also complicates field handling on some databases. I really don't want to go down that route if we can avoid it.
How many "serialized" fields do we have? I know of cache, session, and bits of the Field system.
Oh yeah, and PHP, you suck. PostgreSQL, you suck, too.
Comment #11
Damien Tournoud CreditAttribution: Damien Tournoud commented
@Ognyan Kulev: we need to promote clear and simple rules.
"All serialized data should be stored in BLOB fields" is clear and simple.
"All serialized data that might have zero values in them should be stored in BLOB fields, otherwise TEXT is ok" is unclear and confusing.
Comment #12
ogi CreditAttribution: ogi commented
@Crell: refusing to store the NUL "character" in a TEXT column is sensible and I welcome it. And it's natural for such kinds of fields (serialized data structures) not to be indexable or processable other than set/get.
@Damien Tournoud: "All serialized data should be stored in BLOB fields" sounds OK to me. I'm willing to make a patch, but I just need to know the policy about this schema change: a) should there be e.g. system_update_70xx-like functions, and b) should it preserve old contents? (My PostgreSQL didn't allow me to ALTER ... ALTER COLUMN; I had to ALTER ... DROP; ALTER ... ADD.) I guess the answer to both of these is yes. Another thing that is unknown to me is the comment "@} End of "defgroup updates-6.x-to-7.x" * The next series of updates should start at 8000." So this update function should be in updates-6.x-to-7.x, shouldn't it?
Comment #13
mainpc CreditAttribution: mainpc commented
I agree. If it can contain NUL or is just an array of bytes (i.e. raw serialization), then it is binary and should be treated as such by everything. If it is text (and has a specific character encoding), then it should never contain NUL (at least, not without escaping). If we need to use text functions against these fields, then we should not be using serialization and should be normalizing the data.
Comment #14
Josh Waihi CreditAttribution: Josh Waihi commented
Blob fields for serialized data make sense: it's serialized, therefore there is no need to index or search on such fields. Also, for schema updates, dropping cache columns to re-create them won't matter, since we can rebuild those easily enough.
@Ognyan Kulev: because this is a change to Drupal 7, you need to write an update function called system_update_70XX in system.install that alters the columns you're changing to blob.
Nice work BTW :)
Comment #15
Crell CreditAttribution: Crell commented
There are other serialized fields besides cache that we cannot drop, like Field API configuration and sometimes even the variable table.
Comment #16
Josh Waihi CreditAttribution: Josh Waihi commented
Serialized columns will have a key on which they are indexed; we can build an array in PHP in which we can store the data until it can be inserted back into the new blob field. Unless we can do a manual CAST conversion in the db?
Comment #17
Crell CreditAttribution: Crell commented
That could work, possibly, but I'm concerned about the memory usage of that approach. It would have to be in 3 separate update hooks: add the new column, port data over one record at a time (batched), and delete the old column.
Comment #18
ogi CreditAttribution: ogi commented
With a single UPDATE ... SET column_blob=column::bytea statement I managed to copy all data from one text column to another bytea column in PostgreSQL 8.3. Such statements surely exist in MySQL and SQLite, but the problem is that they will be SQL-driver specific, and system_update_* would have hard-coded (supposedly) 3 different SQL-specific statements. Is this acceptable?
Comment #19
Crell CreditAttribution: Crell commented
I'd prefer not to if we can avoid it, but maybe.
For MySQL we can probably just do a straight read since BLOBs are almost TEXT and require no special processing. I've no idea about SQLite, though, other than making schema changes to SQLite tables is a PITA.
Although... No one is running Drupal 6 on SQLite, so do we even need that update hook?
Comment #20
Josh Waihi CreditAttribution: Josh Waihi commented
Well, technically - no. We are currently not supporting any version of Drupal that runs on SQLite until Drupal 7.0 is released. Considering the nature of the change, if need be, a per-database solution will be fine.
Comment #21
ogi CreditAttribution: ogi commented
I've made a patch and tested it (install without the patch, look at {queue}, patch, run update.php, check that the schema is changed and data is still in {queue}) with the following DBs: MySQL 5.0.51a, PostgreSQL 8.3.9, SQLite 3.5.9 (Debian Lenny).
It's hard for me to judge whether {queue}.data is the only column that needs changing. Only columns that may hold serialized objects need to be changed. Can anybody help with that?
Comment #22
Crell CreditAttribution: Crell commented
The queue table doesn't exist in D6, so it wouldn't need to be changed anyway. :-)
In core, there are the cache tables (which don't need data retained, as we flush them anyway), sessions (same), and variable. There's also, and this one is ugly, most of the configuration for fields. That has to migrate from CCK's tables in D6 to core's tables in D7. I am not sure what else is happening there, though, so those may be covered elsewhere already by other upgrade processes.
The tables themselves would need to be updated in the schema either way, even if we don't preserve data.
Comment #23
ogi CreditAttribution: ogi commented
There's an ALTER TABLE name ALTER [ COLUMN ] column [ SET DATA ] TYPE type [ USING expression ] statement in PostgreSQL. I'll make a new patch soon and submit it after testing, including with an upgrade from D6.
Comment #24
Damien Tournoud CreditAttribution: Damien Tournoud commented
As far as I know, we have everything in place for:
To work on all databases.
Which problem have you bumped into with that, Ognyan?
Comment #25
andypost
Subscribe.
Comment #26
ogi CreditAttribution: ogi commented
I now see that there is functionality in pgsql/schema.inc for casting when changing fields, but unfortunately it gives an SQL error - I wonder if it has ever worked, because there's no such syntax in PostgreSQL's ALTER TABLE. The error message is:
SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "data" LINE 1: ALTER TABLE queue ALTER data SET data = CAST(data_old as byt... ^
and the relevant code is:
Anyway, I'm attaching the simplified patch.
Comment #27
Josh Waihi CreditAttribution: Josh Waihi commented
I'm wondering if it's a reserved word issue? http://www.postgresql.org/docs/7.3/static/sql-keywords-appendix.html - the table says it's not, but "data" is a reserved word in the SQL standard. That PHP code doesn't make sense to me. Shouldn't it be more like:
or maybe I'm just seeing it out of context
Comment #28
andypost
It's a casting problem. A working solution I found: http://bytes.com/topic/postgresql/answers/174532-cast-text-bytea
Comment #29
ogi CreditAttribution: ogi commented
@andypost: In my experience, and as I wrote above, UPDATE forum_gtree SET gid2=gid::bytea; works fine in 8.3. The thread you point to is from 2005, and perhaps things were different then.
Comment #30
andypost
@Ognyan Kulev: anyway, we should choose a method for altering:
1) add a new column, copy the data, drop the old column
2) alter table queue alter data type bytea using data::bytea;
Both of these changes require some decoding table to generate the type transformation.
Comment #31
andypost
Tagging. Tests for serialization are also needed at #718636: Cache does not unserialize serialized data in certain conditions, which is already fixed.
Comment #32
jbrown CreditAttribution: jbrown commented
This problem can also be triggered in MySQL like this:
resulting in
This definitely needs to be fixed before D7 is released so we are not making major schema changes after.
I would be very interested to see the performance implications of a drupal_serialize() that wrapped gzdeflate() around serialize().
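The gzdeflate() idea floated here can be measured directly; a rough sketch (the sample data is made up, and exact byte counts will vary):

```php
<?php
// Rough sketch of the gzdeflate()-wrapped serialize() idea from this
// comment (never adopted). Note the compressed form is binary either
// way, so it would still require a BLOB column.
$data = array_fill(0, 1000, str_repeat('lorem ipsum ', 4));

$plain  = serialize($data);
$packed = gzdeflate($plain);

// Compression helps a lot on repetitive data like this.
printf("plain: %d bytes, deflated: %d bytes\n", strlen($plain), strlen($packed));

// gzinflate() reverses it, and the round trip is lossless.
var_dump(unserialize(gzinflate($packed)) === $data); // bool(true)
```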
Comment #33
MichaelCole CreditAttribution: MichaelCole commented
Writing test today.
Comment #34
MichaelCole CreditAttribution: MichaelCole commented
Additional crash cases:
// Crash cases:
// 1)
// drupal_set_message("This freaky data is not a crash: " . hash('md5', 'lorem ipsum', TRUE));
// drupal_exit();
// 2)
// variable_set("dontcrash", "This freaky data is not a crash: " . hash('md5', 'lorem ipsum', TRUE));
// 3)
$name = "dontcrash";
$value = "This freaky data is not a crash: " . hash('md5', 'lorem ipsum', TRUE);
db_merge('variable')->key(array('name' => $name))->fields(array('value' => serialize($value)))->execute();
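The "freaky data" above is problematic because hash() with the third argument TRUE returns the raw 16-byte binary digest rather than hex text, so the value (and its serialization) contains arbitrary bytes that a 'text' column may mangle, even though PHP itself round-trips it fine:

```php
<?php
// Why the cases above crash: hash(..., TRUE) returns raw binary output,
// not hex text, so the resulting string is not safe for a 'text' column.
$value = "This freaky data is not a crash: " . hash('md5', 'lorem ipsum', TRUE);

// Raw md5 digests are always 16 bytes of arbitrary binary data.
var_dump(strlen(hash('md5', 'lorem ipsum', TRUE))); // int(16)

// PHP serialization itself is binary-safe; only the storage layer breaks.
var_dump(unserialize(serialize($value)) === $value); // bool(true)
```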
Comment #35
MichaelCole CreditAttribution: MichaelCole commented
(New patch. Patch in #21 only covered the queue table.)
What's the problem?
serialize($binarydata) will fail when inserted into a 'text' field. For example:
What's the solution?
Convert:
'type' => 'text',
to:
'type' => 'blob',
'size' => 'big', // if data is bigger than 16k (mysql), make big
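In Schema API terms, the before/after of one of these field definitions looks roughly like the following (a sketch; the 'not null' key and exact definitions are assumptions, not copied from the committed patch):

```php
<?php
// Sketch of the conversion described above for a field like {variable}.value.
// The exact keys in the committed patch may differ.
$before = array(
  'type' => 'text',     // not binary-safe on every engine
  'not null' => TRUE,
  'size' => 'big',
);

$after = array(
  'type' => 'blob',     // binary-safe: NUL bytes survive on every engine
  'not null' => TRUE,
  'size' => 'big',      // needed when data may exceed 16k (MySQL)
);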
What fields were changed (current size)?
variable.value (big)
sessions.session (big)
actions.parameters (big)
batch.description (big)
menu_router.load_functions (16k) - comments say serialize
.to_arg_functions (16k) - comments say serialize
.access_arguments (big)
.page_arguments (big)
menu_links.options(big)
queue.data (big)
system.info (big)
What fields were skipped?
cache.headers (assumed cache knew what it was doing)
menu_router.description (description, not a serial)
menu_router.include_file (filename, not a serial)
Test plan:
Tell yer testbot to put that in its pipe and smoke it!
Comment #37
cleaver CreditAttribution: cleaver commented
I reviewed the test at the code sprint and it looks good. The patch does apply successfully and the test does not fail locally.
Note, this is a special test case. The patch is to system.install, so the patch must be applied BEFORE installing Drupal, i.e. while the database is still empty.
Testing manually gets the desired result -- test fails if the code patch is not applied. The test succeeds if installing using the patched system.install code.
Not sure how to get this so that testbot will like it.
Comment #38
MichaelCole CreditAttribution: MichaelCole commented
"The test did not complete due to a fatal error. Completion check text.test 73 TextFieldTestCase->testTextfieldWidgets()"
Hmmm... What's up with that testbot?
Tests pass on my local dev machine.
- updated CVS
- ran just textfield, then all field tests
Reviewed test for impact, and didn't see anything that should have caused the test to fail to run.
Line 73 is a single "test" function that runs two large tests (text and text_long). Resubmitting patch with fix to break these two tests into two test functions.
We'll get more info on next run, or see if it's an intermittent test harness problem.
Comment #39
MichaelCole CreditAttribution: MichaelCole commented
#35: serialize.patch queued for re-testing.
Comment #40
MichaelCole CreditAttribution: MichaelCole commented
This new patch is the SAME CODE, but with a slight modification to the failed test to help with debugging. The test passes on my local computer, but did not complete due to a "fatal error" on the testbot. This patch will help isolate that error if the testbot continues to fail.
Comment #41
MichaelCole CreditAttribution: MichaelCole commented
Previously reviewed and approved: serialize.patch
serialize.patch failed on testbot problem. Resubmitted to testbot and passed. See #35
serialize2.patch is the SAME CODE, but added some debug statements to help diagnose testbot failure. This patch can be IGNORED.
Comment #42
cleaver CreditAttribution: cleaver commented
Michael, I think the last successfully tested patch will be the one applied.
Might want to re-upload the patch from #35 to be sure.
Comment #43
cleaver CreditAttribution: cleaver commented
Forgot that it changes the status... changing back, this patch is good.
Comment #44
Crell CreditAttribution: Crell commented
Can someone please confirm that we don't need a complete definition array for the update function? I think we do actually need a complete definition with null and description keys.
Also, queue is new in Drupal 7 so we don't need to update that one.
Comment #45
catch
Tagging.
Comment #46
jbrown CreditAttribution: jbrown commented
#35: There are serialized fields in other core modules.
Comment #47
jbrown CreditAttribution: jbrown commented
Try this.
It converts all the fields in #46 to 'blob'.
Fields that store serialized data now have 'serialize' => TRUE even if they are not written with drupal_write_record() .
drupal_write_record() will only serialize data for 'blob' fields.
Comment #48
andypost
Confirmed that db_change_field() should use the full field definition. For example: http://api.drupal.org/api/function/block_update_7003/7
Also all field constraints should be dropped (if any) http://api.drupal.org/api/function/db_change_field/7
Comment #49
jbrown CreditAttribution: jbrown commented
I implemented the full field definition for the update functions. This is required because the mysql driver uses ALTER TABLE {' . $table . '} CHANGE, which needs the same full column definition as CREATE TABLE: http://dev.mysql.com/doc/refman/5.1/en/alter-table.html
Serialized fields don't have any constraints.
Comment #50
andypost
Really needs a bit of work...
trailing whitespace
same 4 times
+1 to RTBC after re-roll
Comment #51
jbrown CreditAttribution: jbrown commented
Upgrading from D6 needs to be tested with mysql and postgres.
Comment #52
jbrown CreditAttribution: jbrown commented
@andypost: I don't know what trailing whitespace you are referring to.
Comment #53
jbrown CreditAttribution: jbrown commented
Fixed testDrupalWriteRecord().
Comment #54
andypost
@jbrown: please try to review your patch with Dreditor and you'll see what I'm talking about; there should not be spaces on blank lines. Dreditor will show them colored red.
this is a trailing whitespace
Comment #55
jbrown CreditAttribution: jbrown commented
Thanks @andypost!
Comment #56
andypost
Now it's good to go.
Comment #57
jbrown CreditAttribution: jbrown commented
Not quite yet!
Upgrading from D6 needs to be tested with mysql and postgres.
Comment #58
jbrown CreditAttribution: jbrown commented
Comment #59
jbrown CreditAttribution: jbrown commented
This patch fixes creation of the queue table during the upgrade.
Todo: alter cache tables during upgrade.
Comment #60
catch
We can drop and recreate cache tables; that should be quicker than altering them if they have stuff in them. Modules which maintain their own cache table are going to have to do this too, so this will have to be documented in the 6-7 upgrade guide.
Comment #61
jbrown CreditAttribution: jbrown commented
Are cache tables currently dropped during the D6->D7 upgrade? Is there an issue for this?
Comment #62
jbrown CreditAttribution: jbrown commented
Updated patch with cache tables dropped and recreated.
Comment #63
Crell CreditAttribution: Crell commented
I'm confused. Why are we converting to use drupal_write_record() more? Blob fields are handled transparently by the DB drivers.
I am also not wild about the serialized flag in schema. That's not information about the table; it's information about how Drupal may or may not use those fields. I am not convinced those belong together; to date the schema is almost Drupal-agnostic in its definitions.
Comment #64
jbrown CreditAttribution: jbrown commented
The main purpose of this patch is for serialized variables to be reliably stored in the db.
The two places where I have used drupal_write_record() turn 45 lines into 2 lines and make the code much cleaner.
This patch does not introduce the 'serialize' schema option - whether or not that is a good idea is a separate discussion. Although, perhaps this patch should not have 'serialize' for fields that are not written by drupal_write_record().
Comment #65
andypost
@Crell: This patch is about storing serialized data, not about handling blob fields.
Postgres cannot be tested until #761976: DatabaseSchema_pgsql::changeField() is broken is fixed.
Patch seems good
Comment #66
YesCT CreditAttribution: YesCT commented
#62: serialize.patch queued for re-testing.
Comment #68
Crell CreditAttribution: Crell commented
Hrm. OK, that's what I get for reading patches too quickly. Yeah, serialize is already there. It sucks, but we can't change that now.
It looks like the patch just broke on a change to the schema of cache tables, so here's a simple reroll. Bot, you cool?
Comment #70
Crell CreditAttribution: Crell commented
Grumble grumble duplicate function names grumble grumble. I think I got them all.
Comment #72
Crell CreditAttribution: Crell commented
I guess I didn't. *sigh*
Comment #73
dhthwy CreditAttribution: dhthwy commented
You've got a few comments past the 80-character limit. Plus, your comments for functions need to begin with a third-person verb as far as I understand (i.e. s/convert/converts).
Don't hate me tho. I thought I'd point that out before aspilicious and sun jump you and steal your candy.
Comment #74
Damien Tournoud CreditAttribution: Damien Tournoud commented
Em. Let's not do that. If I set serialize, I want the data serialized. End of story. The fact that some databases cannot handle it in some cases is totally irrelevant.
Other than that, I see no compelling reason to move to drupal_write_record() in those places. Let's only fix one bug at a time; please remove those. This issue is *only* about converting text columns to blob columns, and adding a test.
Speaking of which:
This:
I vote for just removing it.
Comment #75
jbrown CreditAttribution: jbrown commented
Implemented recommendations in #74.
For {cache_block} and {cache_update} I just modified the update functions that drop the headers field. Those functions now just drop and recreate the tables.
Is it okay to use drupal_get_schema_unprocessed('system', 'cache') in an update function?
Comment #76
Damien Tournoud CreditAttribution: Damien Tournoud commented
Thanks for the new patch, we are getting there. Some additional remarks:
'serialize' => TRUE entries in the schemas.
Comment #77
jbrown CreditAttribution: jbrown commented
Done.
I don't think we can use drupal_get_schema_unprocessed() in an update function for the reasons discussed here: http://drupal.org/node/150220
Comment #78
Josh Waihi CreditAttribution: Josh Waihi commented
Agreed, I don't think you can use drupal_get_schema_unprocessed().
Comment #79
jbrown CreditAttribution: jbrown commented
The main blocker is #761976: DatabaseSchema_pgsql::changeField() is broken
Comment #80
chx CreditAttribution: chx commented
You can't ever use the existing schema from the code in an update, because the schema in the code might change later; then your update lands you in a different place, and various users will have a different schema with the same schema version, which is a terrible mess. Write out the definition.
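A sketch of the pattern chx describes: the update hook carries its own literal copy of the field definition instead of reading the live schema. db_change_field() is stubbed here so the sketch runs outside Drupal, and the update number and field definition are placeholders, not the committed code:

```php
<?php
// Stub of Drupal's db_change_field() so this sketch is self-contained;
// it only records what would be altered.
$applied = array();
function db_change_field($table, $field, $field_new, array $spec) {
  global $applied;
  $applied[] = array($table, $field, $spec['type']);
}

// Placeholder update hook: the definition is written out in full here,
// never fetched from drupal_get_schema() or similar.
function system_update_7NNN() {
  db_change_field('variable', 'value', 'value', array(
    'type' => 'blob',
    'not null' => TRUE,
    'size' => 'big',
  ));
}

system_update_7NNN();
print_r($applied); // one recorded alteration: variable.value => blob
```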
Comment #81
jbrown CreditAttribution: jbrown commented
Yeah - I'll reroll this with the definition once the blocker is resolved.
Comment #82
catch
Why is the postgres issue a blocker?
Comment #83
jbrown CreditAttribution: jbrown commented
See #65.
db_change_field() needs to work on postgres.
Comment #84
Damien Tournoud CreditAttribution: Damien Tournoud commented
I agree, nothing blocks this issue. The issue with the PostgreSQL schema implementation doesn't block testing of this on MySQL and SQLite.
Comment #85
jbrown CreditAttribution: jbrown commented
I wasn't aware that we were breaking HEAD for postgres.
Comment #86
catch
HEAD is already broken for postgres, we don't support head-to-head updates (see http://drupal.org/project/head2head), and we already have db_change_field() calls in core - so this patch doesn't cause any harm that's not already there.
Comment #87
andypost
@catch: As reported in #32, MySQL is affected too!
Comment #88
Damien Tournoud CreditAttribution: Damien Tournoud commented
Please move ahead with this patch. There is nothing blocking it. PostgreSQL doesn't implement the API correctly, and that problem is dealt with in the other issue. Nothing prevents us from moving forward here.
Comment #89
jbrown CreditAttribution: jbrown commented
I haven't actually tested this yet on a D6=>D7 upgrade.
Comment #90
jbrown CreditAttribution: jbrown commented
Also see #831332: drupal_get_schema_unprocessed() called in update functions.
Comment #91
jbrown CreditAttribution: jbrown commented
Hang on - each cache table has its own description.
Comment #92
jbrown CreditAttribution: jbrown commented
Comment #93
catch
This is longer than 80 chars, and we rarely reference issues in code comments.
Just "Convert text to blob since this stores serialized data, remove headers column." should be fine I think.
Leaving CNR for the bot and apart from that it looks fine.
Comment #94
jbrown CreditAttribution: jbrown commented
I'll run it through a D6=>D7 upgrade tomorrow and check it with the schema module.
Comment #95
Damien Tournoud CreditAttribution: Damien Tournoud commented
Looks good to me too.
Comment #96
jbrown CreditAttribution: jbrown commented
Fixed a foreach typo in system.install.
Comment #97
jbrown CreditAttribution: jbrown commented
Updated comments.
Comment #98
Damien Tournoud CreditAttribution: Damien Tournoud commented
Carefully reviewed the patch. Everything we need seems to be there.
Comment #99
catch
Opened a head2head issue ahead of time: #832124: #690746 - Text column type doesn't reliably hold serialized variables.
Comment #100
rfay
Subscribe.
Comment #101
jbrown CreditAttribution: jbrown commented
Fixed schema upgrade code in the image module.
Comment #102
andypost
These changes correlate with #831332: drupal_get_schema_unprocessed() called in update functions.
Maybe it's better to sync the changes in cache tables by producing one update function.
Comment #103
Stevel CreditAttribution: Stevel commented
#831332: drupal_get_schema_unprocessed() called in update functions got committed, meaning this patch doesn't apply any more.
Why are the cache tables dropped and recreated instead of doing a db_change_field() like for the other converted fields?
@andypost: I've posted an idea for achieving that (and a reason) in #837342: Simultaneous updating required for modules with a cache bin when the cache schema changes.
Comment #104
andypost
@Stevel: The reason to drop/create cache tables is that cache tables can be big, and once this change happens in an update_N the cache should be cleared anyway. So there is actually no reason to keep data in cache tables. Contrib modules that have cache tables should take care of cleaning and tracking changes in their cache tables.
Comment #105
Damien Tournoud CreditAttribution: Damien Tournoud commented
Full reroll.
I kept the drop/recreate pattern for cache tables, which simplifies everything.
This looks ready to go in to me.
Comment #106
catch
Looks ready to me too; I would mark it RTBC, but I haven't tested it at all.
Comment #108
Damien Tournoud CreditAttribution: Damien Tournoud commented
#105: 690746.patch queued for re-testing.
Comment #109
Damien Tournoud CreditAttribution: Damien Tournoud commented
RTBC based on #106 and my previous review in #98.
Comment #110
Dries CreditAttribution: Dries commented
Committed to CVS HEAD. Thanks.
Comment #111
Damien Tournoud CreditAttribution: Damien Tournoud commented
Sorry about that.
Comment #112
catch
Comment #113
Stevel CreditAttribution: Stevel commented
No, all cache tables have their own descriptions, so the array definition which held these descriptions is gone here. I've seen it somewhere recently; looking up where.
edit: found it in #101:
Then we need $schema['description'] = $description in each iteration of the for loop.
Comment #114
Stevel CreditAttribution: Stevel commented
New patch that sets the correct description for each cache table.
Comment #115
catch
Comment #116
Dries CreditAttribution: Dries commented
Committed to CVS HEAD. Thanks.