Status:
Closed (fixed)
Project:
Project Issue File Review
Version:
6.x-2.x-dev
Component:
Code
Priority:
Normal
Category:
Feature request
Assigned:
Reporter:
Created:
22 Nov 2008 at 17:05 UTC
Updated:
14 Apr 2010 at 18:02 UTC
Comments
Comment #1
catch commented
And SQLite now :)
Comment #2
Anonymous (not verified) commented
Subscribing
Comment #3
boombatower commented
This was discussed early on, but since we didn't have access to additional machines it wasn't feasible. For this to work we need slaves running the different databases. We have enough slaves, so it should just require a re-setup of the slaves and a bit of tweaking in the code to force each patch to go to each environment.
What should the report on d.o look like? And should it be sent after the first environment?
Comment #4
dave reid commented
Subscribing. Awesome idea that would be a huge +1 for Drupal development and cross-database compatibility.
Comment #5
alexanderpas commented
No PostgreSQL 8.1 and PostgreSQL 8.2 slaves are needed, per #248205: PostgreSQL surge #7: Require PostgreSQL 8.3
Comment #6
catch commented
For reporting back, maybe something like:
Passed: MySQL 5.0, PostgreSQL 8.3
Failed: SQLite 7923 passes 6 fails 0 exceptions.
Good to get a report back ASAP I think - a patch can fail to apply, have syntax errors, or fail tests on any platform - it's fairly rare that it'll pass on some and break on others.
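A minimal sketch of how a per-environment summary in the format proposed above could be rendered. This is illustrative Python, not PIFR's actual (PHP) code, and the shape of `results` is an assumption:

```python
def format_report(results):
    """Render a pass/fail summary per database environment.

    `results` maps an environment name to a dict with 'passes',
    'fails', and 'exceptions' counts (a hypothetical shape, not
    PIFR's actual data structure).
    """
    passed = [env for env, r in results.items()
              if r['fails'] == 0 and r['exceptions'] == 0]
    failed = [env for env, r in results.items()
              if r['fails'] > 0 or r['exceptions'] > 0]
    lines = []
    if passed:
        lines.append('Passed: ' + ', '.join(sorted(passed)))
    for env in sorted(failed):
        r = results[env]
        lines.append('Failed: %s %d passes %d fails %d exceptions.'
                     % (env, r['passes'], r['fails'], r['exceptions']))
    return '\n'.join(lines)

report = format_report({
    'MySQL 5.0': {'passes': 7929, 'fails': 0, 'exceptions': 0},
    'PostgreSQL 8.3': {'passes': 7929, 'fails': 0, 'exceptions': 0},
    'SQLite': {'passes': 7923, 'fails': 6, 'exceptions': 0},
})
print(report)
```

The counts here are taken from the example in the comment; the grouping (all passes on one line, one line per failure) is one way to keep the d.o issue comment short while leaving detail on testing.drupal.org.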
Comment #7
boombatower commented
Hunmonk and I feel that we should iron out the rest of the issues before attempting to expand into this area.
Comment #8
dries commented
I agree that this would be super-useful. :-)
Comment #9
dave reid commented
Throwing an idea out there... I feel like it would be useful, once we have multi-environment testing, to test all patches on a MySQL slave first. Only if the patch applies and the tests pass on the MySQL slave should it be sent to the other not-quite-as-popular database environments. That would save processing time if we have fewer pgsql and sqlite slaves than mysql slaves (which I think will be most likely).
Comment #10
catch commented
That makes a lot of sense to me - especially factoring in retests of patches in the queue. It might also simplify some of the report-back issues. If we end up needing to test patches on MySQL 5.0 and 5.1, on IIS (?), and other combinations, then it'll help us scale in those directions too.
Comment #11
josh waihi commented
Subscribing, keen to see the postgres stuff happen.
Comment #12
damien tournoud commented
For simplicity of implementation, let's keep the principle of "one slave, one environment". We can obviously virtualize several environments on the same server hardware if needed.
Comment #13
josh waihi commented
This isn't exactly the best work I've done, but it's what I needed to do to make this module work on postgres.
Biggest barrier - if you set a column to be not null, then don't try to set its value to null (see patch for details).
Also, when you have both postgres and mysql support in PHP, Drupal adds a radio button set to choose postgres or mysql. This isn't handled in the code; I kept it commented out because in most cases a testing environment will have just the one possibility.
Also, I added host support in postgres because my environment had separate web and database servers.
I know there is lots of room for improvement still - this is just a guideline.
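The not-null barrier described in #13 is easy to reproduce: stricter databases (PostgreSQL, and SQLite as shown here via Python's sqlite3 module) reject a NULL in a NOT NULL column outright, where a non-strict MySQL setup would silently coerce it to a default. The table name is made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE pifr_demo '
             '(id INTEGER PRIMARY KEY, name TEXT NOT NULL)')

# Stricter engines refuse the NULL instead of coercing it, which is
# exactly the class of failure that broke the module on postgres.
try:
    conn.execute('INSERT INTO pifr_demo (name) VALUES (NULL)')
    outcome = 'accepted'
except sqlite3.IntegrityError:
    outcome = 'rejected'
print(outcome)  # rejected
```

This is also why code that passes all tests on a lenient MySQL slave can still fail on the other environments.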
Comment #14
alexanderpas commented
@#13,
actually... we should also test that radio button!
Comment #15
boombatower commented
Comment #16
boombatower commented
Brainstorming data structure.
Table: pifr_environment
Fields:
OR
I think it may make more sense to just use constants, since the code should probably check to ensure that the client's selected environment type is valid (i.e. look for mysql, postgres...).
Then I need to add an environment field to the pifr_result table and possibly abstract the summary result in the pifr_file table.
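The constants-plus-validation idea might look like the following. This is an illustrative Python sketch (PIFR itself is PHP), and the constant names are guesses, not the committed code:

```python
# Hypothetical environment type constants; the point is that the
# server validates whatever environment string a client reports
# against a fixed, known list.
PIFR_ENVIRONMENT_MYSQL = 'mysql'
PIFR_ENVIRONMENT_POSTGRES = 'postgres'
PIFR_ENVIRONMENT_SQLITE = 'sqlite'

VALID_ENVIRONMENTS = {
    PIFR_ENVIRONMENT_MYSQL,
    PIFR_ENVIRONMENT_POSTGRES,
    PIFR_ENVIRONMENT_SQLITE,
}

def environment_is_valid(environment):
    """Check that a client's selected environment type is one we know."""
    return environment in VALID_ENVIRONMENTS

print(environment_is_valid('postgres'))  # True
print(environment_is_valid('oracle'))    # False
```

Constants avoid a lookup table for what is effectively a small, code-controlled enumeration, at the cost of a code change whenever an environment is added.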
Comment #17
boombatower commented
pifr_file
pifr_file_result
pifr_file_result_detail (rename pifr_result)
pifr_file summary results will move to pifr_file_result.
pifr_file_result:
Then pifr_file_result_detail can hold the detailed results, as it does now, but with a pifr_file_result_id field added to relate them back to the summary result and environment.
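A sketch of that split, expressed as SQLite DDL driven from Python. Only the table names, the move of summary data off pifr_file, and the pifr_file_result_id link come from the comment; every other column is a guess for illustration:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
-- Per-environment summary for a tested file (holds the summary
-- results that used to live on pifr_file directly).
CREATE TABLE pifr_file_result (
    pifr_file_result_id INTEGER PRIMARY KEY,
    file_id INTEGER NOT NULL,     -- references pifr_file
    environment TEXT NOT NULL,    -- e.g. 'mysql', 'postgres'
    passes INTEGER DEFAULT 0,
    fails INTEGER DEFAULT 0,
    exceptions INTEGER DEFAULT 0
);

-- Detailed results (the table formerly named pifr_result), related
-- back to a summary row and therefore to an environment.
CREATE TABLE pifr_file_result_detail (
    detail_id INTEGER PRIMARY KEY,
    pifr_file_result_id INTEGER NOT NULL
        REFERENCES pifr_file_result (pifr_file_result_id),
    message TEXT
);
''')

conn.execute("INSERT INTO pifr_file_result (file_id, environment, passes) "
             "VALUES (1, 'mysql', 7929)")
row = conn.execute(
    'SELECT environment, passes FROM pifr_file_result').fetchone()
print(row)  # ('mysql', 7929)
```

One summary row per (file, environment) pair is what lets a single patch carry independent results for mysql, postgres, and sqlite.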
Comment #18
josh waihi commented
So a test environment can have multiple environments in itself, such as postgres, mysql (MyISAM), mysql (InnoDB) and SQLite, and the testbed will test a single patch against each environment? Sounds like a good idea, but it will have high load and longer testing times. You could store the patch -> environment relation on the master (I'm assuming you already are) and set up each slave to support only one environment, so in config when you tell it it is a slave, it will want to know which environment to support.
OR!!
The master sends a request to the slave saying "hey, test this patch in environment ". So testbeds support all environments but only test in the one they are told to test in.
I like the latter idea. boombatower?
Comment #19
catch commented
The most efficient workflow would be to send patches for testing on MySQL first (or at least, only one environment) - since 99% of patches which fail on other platforms are going to fail on MySQL too, going by http://testing.drupal.org/pifr/stats. Only if the patch passes this initial test does it then get sent to other environments.
In terms of a UI - we only need a pass/fail result on the issue itself - all other information can live on testing.drupal.org. Probably the safest way would be to report failure as soon as there's a failure, but only report a pass when all possible tests have run. This will mean a longer turnaround for passing tests, but the shortest possible turnaround for failures.
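The prioritized, fail-fast workflow described here could be sketched as follows. The environment names and the `run_tests` callback are hypothetical stand-ins, not PIFR's actual API:

```python
# Test environments in priority order (cheapest/most common first);
# report on the first failure, but report a pass only once every
# environment has passed.
PRIORITY = ['mysql', 'postgres', 'sqlite']

def review_patch(run_tests, environments=PRIORITY):
    """`run_tests(env)` is a hypothetical callback: True means pass."""
    for env in environments:
        if not run_tests(env):
            return ('fail', env)   # shortest turnaround for failures
    return ('pass', None)          # all environments passed

# A patch that breaks only on the stricter backends:
print(review_patch(lambda env: env == 'mysql'))  # ('fail', 'postgres')
print(review_patch(lambda env: True))            # ('pass', None)
```

Since most failing patches fail on MySQL anyway, ordering MySQL first means the later, scarcer slaves rarely get work they will only reject.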
Comment #20
boombatower commented
The UI sounds good.
I still think each slave should test one thing, as there will be a number of configuration issues; as DamZ pointed out, you could set up the server with virtual machines.
Each PIFR test client should do a single environment, which it will specify. The environments will be prioritized so that mysql is selected first and such, and testing stops after one environment fails.
@Josh Waihi: all the patch information is stored on the master; the test clients only know the necessary details about the patch they are currently testing.
Comment #21
josh waihi commented
In terms of usability, it would be helpful to the developers working on a patch to know which databases the patch failed on, not just one. For example, #355255: Adding groups to create new user form and limiting those shown for user registration was passing without the patch (as many of the postgres surge issues were) but failed on postgres, SQLite and MySQL running InnoDB. We had to test all environments to discover this, and it made clear that the issue was related to transactional databases. Knowing that the tests failed in the same spot on several other databases will help the developers identify a common problem rather than a problem in a single database.
Yes, in the long run, tests will take longer for passes and fails, but it seems more usable in this context. If we can get 3 or 4 servers for each environment, that would run things a bit faster.
Also, each environment should be able to run Drupal in any other environment, so that the install page shows the options for Postgres, MySQL and SQLite.
Comment #22
boombatower commented
It simplifies the code significantly if each server only runs one environment. If we want to double up servers then we should use virtual machines.
Having developers know which environment failed is the main purpose - and ensuring Drupal actually works in all the environments it supports! This is planned, and I have already done a bit of design work (although postponed until the end of the month with the project 6.x porting).
We can always get more servers; as of now we have over 12 servers we can use, but not enough manpower to configure them and ensure they are all stable. PIFR 2.x will help a lot with that, so we are waiting. In other words, testing on multiple environments should not be a load problem.
Comment #23
josh waihi commented
I can help :)
Comment #24
boombatower commented
Now that PIFT has been ported, along with project, to 6.x, work can begin on PIFR again.
Created the database schema for this and committed it.
Comment #25
boombatower commented
Comment #26
boombatower commented
The data structure and other components are complete.
The final stage is the actual review code. Once it is abstracted (#369467: Abstract review code), I will simply need to write a few review overrides for the three databases to be tested.
Comment #27
boombatower commented
The abstraction is complete, and an initial backend has been created for postgres by Josh Waihi.
Please confirm that this is the list of databases we want to run tests on.
and SQLite (needs someone to write a backend); see /client/review/db/db.inc for interface specs and the other two implementations (no documentation yet, but very simple).
All that is required is:
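The real interface contract lives in /client/review/db/db.inc (PHP) and isn't reproduced in this thread. Purely as an illustration of the per-database backend pattern being described, here is a Python sketch with entirely hypothetical method names:

```python
from abc import ABC, abstractmethod

class DatabaseBackend(ABC):
    """Hypothetical per-database backend, one subclass per environment.

    Method names are illustrative only; the actual contract is
    whatever /client/review/db/db.inc specifies.
    """

    @abstractmethod
    def create_database(self, name):
        """Prepare a scratch database for a test run."""

    @abstractmethod
    def drop_database(self, name):
        """Tear the scratch database down after the run."""

class PostgresBackend(DatabaseBackend):
    # Returns the shell command as a string for illustration, rather
    # than executing anything.
    def create_database(self, name):
        return 'createdb %s' % name

    def drop_database(self, name):
        return 'dropdb %s' % name

backend = PostgresBackend()
print(backend.create_database('pifr_test'))  # createdb pifr_test
```

Writing the missing sqlite backend would then mean adding one more subclass implementing the same handful of methods.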
Comment #28
tstoeckler commented
PostgreSQL 8.3 is required by Drupal 7 as of #248205: PostgreSQL surge #7: Require PostgreSQL 8.3
Comment #29
boombatower commented
Ok, so we have:
- MySQL 5.0.x (MyISAM)
- PostgreSQL 8.3
- SQLite
- MySQL 5.1.x (MyISAM)
- MySQL 5.0.x (InnoDB)
Edit: updated the order after discussing with DamZ.
Comment #30
boombatower commented
Now supports the above types (except we still need an sqlite backend, but that is a separate issue).
Comment #31
c960657 commented
A slave running Windows would also be useful. A lot of things, in particular filesystem-related things, behave differently on Windows (e.g. #323854: DatabaseLog->findCaller() fails on Windows).
Comment #32
boombatower commented
That will need to be discussed in a separate issue for later development. I personally don't use Windows and thus will have a hard time testing, and the codebase right now will NOT run on Windows; it is simpler not to support it, given how things are run from the command line.
It is definitely an idea, but since most servers run Linux as well... much lower priority. Please open a specific issue for Windows if you wish.
Comment #34
c960657 commented
Done: #771430: Add test bot running IIS + Windows