The project application process is intended to educate new contributors about security best practices, but our research has found that it does not actually accomplish this. The plan:
- The quiz will be offered only after someone has created a project with a release.
- If the quiz is failed, it can be retaken only after a waiting period (tentatively 30 days), so you can't just click through answers until you pass.
- Passing score: 85%.
- Time limit: 45 minutes.
- The quiz is 30 questions chosen randomly from a larger pool, representing 5 categories.
The categories are:
- SQL Injection (A good way to query the DB is …)
- XSS (A good way to output text is …)
- CSRF (Taking an action should require …)
- General secure coding (Can system() be called safely?)
- Drupal.org and Security Team policies (All code checked into Drupal.org Git repositories should be …)
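To make the categories concrete, here is a hedged Drupal 7 sketch of the distinctions such questions might test. The function names (db_query(), check_plain(), drupal_get_token(), drupal_valid_token(), drupal_access_denied()) are real core APIs, but the surrounding variables and the 'mymodule' token key are purely illustrative:

```php
<?php
// SQL injection: use query placeholders, never string concatenation.
// Unsafe: db_query("SELECT nid FROM {node} WHERE type = '" . $type . "'");
$result = db_query('SELECT nid FROM {node} WHERE type = :type', array(':type' => $type));

// XSS: sanitize user-supplied text before outputting it as HTML.
$output = check_plain($user_supplied_title);

// CSRF: a state-changing action should require a token tied to the user's
// session (the Form API adds and validates one automatically for forms).
$token = drupal_get_token('mymodule_delete_' . $nid);
// ...later, before performing the action:
if (!drupal_valid_token($_GET['token'], 'mymodule_delete_' . $nid)) {
  drupal_access_denied();
}

// General secure coding: system() and friends are almost never safe with
// user input; if a shell call is unavoidable, escape every argument.
$safe_arg = escapeshellarg($user_input);
?>
```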
Types of questions will include
The pool of questions will be written by the security team, including provisional members. Questions will be periodically reviewed & refreshed.
A study guide will be drafted; most of the material will be links to existing documentation and policies.
Comments
Comment #1
webchick: Asked the code review team for feedback at http://groups.drupal.org/node/178364
Comment #2
greggles: subscribing.
Comment #3
sdboyer commented: sub. interesting idea.
Comment #4
dww: I'm tentatively in favor of such a quiz, but I'm not sure it should happen *before* the Git access checkbox. We really want it before you're trying to promote a sandbox to a full project, no? I.e., have a bit set in your profile indicating whether you've passed the test, used in conjunction with the sandbox-promotion permissions to determine whether you're actually allowed to do it.
My other main concern about the quiz is generating good questions that a) can catch most of the sorts of problems we're trying to catch, b) actually encourage people to learn the material instead of just guessing, and c) have answers that aren't going to end up on an easily-googled list somewhere. If you fail the test, is there a delay before you can take it again? What stops you from just trying over and over until you guess right? We should have at least some fill-in-the-blank questions instead of just multiple choice, to help slow down the pure guessing approach.
Another thought: does your "driver's license" ever expire, so that you need to re-take the driver's test? Does this bit decay every few years? Every new major stable release of core?
Comment #5
greggles: I don't think we should require a particular flow in terms of this being before or after the Git checkbox. If someone wants to take the quiz early, they can; if later, they can. Taking the quiz should be a requirement before they create a full project.
I think we can develop a test that meets most of the goals in your second paragraph. We can also try to discourage cheating by googling for the questions/answers ourselves and, if they show up, shutting down the test until we can write a new one.
I don't think we should force people to re-take the test. But we could include "passed Drupal Security test 1.0" on people's profile and then it becomes a source of pride to pass 1.1 and 2.0 versions.
Comment #6
dww: Re: flow: That's what I meant -- it's a separate thing you have to do which you can do whenever you want. However, you can't actually promote a sandbox until your "I passed the test" bit is set, so that's going to likely be the first time people discover they have to do this step when they hit a helpful error message about it.
My main point is that we don't want that error message blocking the "git checkbox" (as mentioned in the original post), but rather blocking sandbox promotion.
Comment #7
webchick: Sure, that's fine. The order of operations is less important. I was trying to spread out the hurdles a bit.
The important thing is it happens before applying for full project perms.
Comment #8
webchick: Feedback from the code review thread so far on possible topics:
- Git branching/tagging, and d.o branching/tagging conventions
- Coding standards (hopefully not needed with automated Coder module checks, but some general education that these exist is probably a good thing)
- t() and other text sanitization functions; knowing when data is safe (e.g. the output of l()) and when it isn't (e.g. $node->title).
- Access checking
- SQL injection escaping (both D6 and D7, since they're different; and if I can interject, the proper way to do node_access queries is a big one)
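To illustrate the D6/D7 difference, the node_access point, and the t()/l() topic above, a hedged sketch using the real core APIs; the specific query and variables are made up for illustration:

```php
<?php
// Drupal 6: printf-style placeholders do the escaping.
$result = db_query("SELECT nid FROM {node} WHERE type = '%s'", $type);

// Drupal 7: named placeholders on top of PDO.
$result = db_query('SELECT nid FROM {node} WHERE type = :type', array(':type' => $type));

// Drupal 7 node listings should carry the node_access tag so that
// access-control modules can alter the query with the user's grants.
$nids = db_select('node', 'n')
  ->fields('n', array('nid'))
  ->condition('n.status', 1)
  ->addTag('node_access')
  ->execute()
  ->fetchCol();

// Text sanitization: @-placeholders in t() are escaped, and l() sanitizes
// its link text; raw values like $node->title are not safe in HTML context.
$message = t('Updated @title', array('@title' => $node->title));
?>
```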
Candidates for Coder automation:
- Checking for $Id$ in files
- Checking for existence of README.txt
- Checking for non-existence of LICENSE.txt
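A minimal sketch of what automating those three checks could look like in plain PHP. This is a hypothetical stand-in (including the function name), not the actual Coder module implementation:

```php
<?php
// Hypothetical project sanity checks mirroring the list above
// (not the real Coder module code).
function mymodule_check_project($dir) {
  $errors = array();
  $it = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
  );
  foreach ($it as $file) {
    // $Id$ keywords are CVS leftovers that should not survive the Git migration.
    if ($file->isFile() && strpos(file_get_contents($file->getPathname()), '$Id$') !== FALSE) {
      $errors[] = 'CVS $Id$ keyword found in ' . $file->getPathname();
    }
  }
  if (!file_exists($dir . '/README.txt')) {
    $errors[] = 'README.txt is missing.';
  }
  // drupal.org packaging adds LICENSE.txt, so it must not be committed.
  if (file_exists($dir . '/LICENSE.txt')) {
    $errors[] = 'LICENSE.txt must not be committed.';
  }
  return $errors;
}
?>
```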
Comment #9
patrickd commented: sub.
Comment #10
mitchell commented: A roundabout point about what I think the potential benefits here are:
The two biggest problems I've identified with the review process, while trying to ramp up my own involvement (note: I don't review large swaths of projects, only ones that match search terms I'm interested in), are these: first, that #1671488: Project Application Checklist is not actually a checklist, which is a probable cause of common mistakes; and second, that reviewers aren't using a prescribed (or uniformly useful) set of standards. The second is actually okay, I think, but it should be better discussed in order to codify something more reproducible by those trying to follow the leads of more experienced reviewers.
I spoke to mlncn about different reviewers' standards and asked what his personal priorities to check for are. He mainly wants the person to have the knowledge they need to be a good maintainer. (Mine is that applicants are informed about ongoing related issues, workgroups, similar projects, available mentors, and community practices.) Others are gung-ho about applicants having every automated test pass. (What are yours?) Though I disagreed with him when he said that applicants need to be responsive when questioned by their peers (in reference to 'needs work' for # weeks leading to "abandoned" / 'won't fix'), his justification, that it would indicate the person would be a good maintainer overall, seemed like a valuable trait to review and/or foster through the process.
This led me to believe that most of the data points currently offered/collected/evaluated in the application process don't give much indication of, or benefit toward, the lesson we want people to learn or the characteristic we want to cultivate (see also #1410826: [META] Review bonus). Take as an example klausi's final message when approving an app: "...As you continue to work on your module, keep in mind: Commit messages - providing history and credit and Release naming conventions." I think klausi is offering great information, and doing so with a motivation similar to mlncn's, but is this when they'd be more receptive to that message, or to a message like "come hang out in #drupal-contribute"? I think the second would have a much bigger impact on helping this newly accepted maintainer develop his/her skills. (Excitement + Social Gathering = Fun Contributing)
I think a quiz would be particularly useful for teaching these topics, e.g. "Is a correct commit message for @Eddie's patch in issue [#123] this or that?", and also for interweaving a couple of other particular community practices and skills, e.g. "How did Frederica fix issue [#123]: did she a) update this handbook page, b) roll back a revision to that handbook page, or c) add a new handbook page?" (information that shows they're learning community practices and reading the important docs pages). I'll try to come up with at least a couple more questions about how to be a good maintainer that will make people think, so that others can finish it out. At least for me, this makes a lot of sense, because I never cared much when someone told me "read this or follow that... you'll be tested on it!" I just waited for the test to come and figured out what the teacher wanted me to know based on the topic and her motivations. That, and I have always been a fan of collaboration and learning tools used at just the right moment ;-)
Responding to some of the things mentioned so far: I am a -1 for it being used to "test" someone's knowledge of PHP, Drupal * API, git, etc. I find those tests boring and harder for the tester to create than the testee to benefit from. My biggest reason, and criteria in general, is that I don't think this would be helpful to most applicants and would just feel like it was a waste of their time. I am also a -1 for making it required or part of a workflow, because some people already know this stuff and their reviewers / mentors / sign-off-erators will recognize them. Not only that, but if we actually make it good (really!), then people might actually want to go out of their way to do it even if they don't have to. It can be the start of some constructed d.o 'activities' other than coding and socializing.
TL;DR: Use this as a tool to help train people, not screen them.
Comment #11
patrickd commented: Maybe we should keep such discussions on http://groups.drupal.org/code-review to reach more people about this issue.
My personal thoughts about your points:
In the meantime I have to agree that a quiz like the one we are currently discussing wouldn't solve the actual problem (and I'm pretty sure the only problem is the steady shortage of reviewers and mentors).
I think the tools and documentation that we give reviewers and applicants are pretty good as they are. People just need to read them. I don't think it's a good idea to shorten documentation and lose useful information in order to make it more attractive to read.
In my experience, some applicants read the documentation only up to the point of creating the application issue, while others continue to read through most of it. Shortening it would not make a big difference here.
Surely all reviewers have their own personal priorities when reviewing applications, though it is clear that the really blocking issues are security, duplication, and licensing. I don't think reviewers should have the right to block applicants because they were idle for some weeks or because their coding style seems odd to the reviewer. Reviewers can certainly share all their nitpicking opinions and ask the applicant to address them, but as long as security, duplication, and licensing are not the problem, applicants should be left the freedom to write code the way they want.
Comment #12
mgifford: There are lots of good things that can come from asking questions of Drupal users.
I'd like to see us use Webform to start making this easier.
It wouldn't need to be a quiz where people feel like they need to pass, but I do see that it could help educate people about best practices in Git, or at least streamline the on-boarding process.
I added some related issues.
Comment #14
mlhess commented: Updating the issue summary here with the action plan.
Comment #15
mlhess commented:
Comment #16
gisle: This seems to be pretty dead in the water.
However, I want to throw in my two cents and express agreement with the problem description in the issue summary, viz:
Some years back, I used to participate as a reviewer. However, I stopped because, in my experience, it devolved into something that was no longer a meaningful onboarding exercise for new contributors and instead became dominated by an almost obsessive focus on petty coding standards. I am a big fan of coding standards, but not to the exclusion of everything else.
A case in point is the recently approved #3135050: [D8] Hover Effects.
The application seems to consist almost in entirety of a JavaScript library ripped off (without acknowledgement) from a Github account (see:
https://github.com/gudh/ihover/blob/gh-pages/src/ihover.css and compare it to https://git.drupalcode.org/project/hover_effects/-/blob/1.x/css/ihover.css).
There are also boilerplate versions of hook_help() and hook_page_attachments(), but that is about it. There is not anything relevant to:
(See this page about what is supposed to be the purpose of the vetting process.)
It also breaks our Drupal Git Contributor Agreement & Repository Usage Policy:
Why the people who still participate in this process approve a project that seems irrelevant to the goals of the PAReview process (including demonstrating the ability to write secure code), and which in addition blatantly violates our Git repo use policy, without so much as pointing this out, is beyond me. But as I already said, I am out.