In other words, there should be a flag that, when ticked, allows a message to be sent and saved into the pm_message table without any recipients being entered into pm_index.

This should allow us to have a pm_roles module with a pm_roles_index table that enables scalable mass user messaging (and og can simply duplicate the functionality if needed).
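At the storage level that would look roughly like this (the pm_message/pm_index split is the module's; the column names, and the pm_roles_index columns, are illustrative rather than the actual schema):

-- Normal send: one pm_message row plus one pm_index row per recipient.
INSERT INTO pm_message (mid, author, subject, body) VALUES (42, 1, 'Hello', '...');
INSERT INTO pm_index (mid, recipient) VALUES (42, 17);

-- With the proposed flag ticked: only the message row is written, and a
-- module such as pm_roles resolves recipients from its own index instead.
INSERT INTO pm_message (mid, author, subject, body) VALUES (43, 1, 'Announcement', '...');
INSERT INTO pm_roles_index (mid, rid) VALUES (43, 3);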

Comments

litwol’s picture

consider the negative end of numbers.

NaheemSays’s picture

eh?

litwol’s picture

Positive numbers could be user IDs; negative numbers could be role IDs.
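That is, the sign of the stored recipient value would encode its type. A minimal sketch, assuming a single recipient column in pm_index (an assumption about the schema) and Drupal's users_roles table:

-- recipient > 0 is a user id; recipient < 0 is a role id.
INSERT INTO pm_index (mid, recipient) VALUES (42, 17);  -- to user 17
INSERT INTO pm_index (mid, recipient) VALUES (42, -3);  -- to everyone in role 3

-- Reading user :uid's messages then has to match both encodings:
SELECT mid FROM pm_index
WHERE recipient = :uid
   OR recipient IN (SELECT -rid FROM users_roles WHERE uid = :uid);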

Berdir’s picture

Hm. There is one big problem when sending messages to groups or roles: we cannot track who read/deleted a message anymore. Additionally, users will receive old messages when they are assigned to a role. That should imho not happen. And they should be able to keep messages they received when they were in a specific group, had a role, whatever...

Actually, I think the best idea might be to use batch api to send messages to all users of the given role/group/whatever.
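Whether that is done in Batch API chunks or as one statement, the point is to expand the role into individual rows at send time, so every member keeps their own read/deleted state. A sketch of the expansion against Drupal's users_roles table (the pm_index columns are illustrative):

-- Give every member of role 3 their own index row for message 42.
INSERT INTO pm_index (mid, recipient, is_new)
SELECT 42, ur.uid, 1
FROM users_roles ur
WHERE ur.rid = 3;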

NaheemSays’s picture

Actually, I think the best idea might be to use batch api to send messages to all users of the given role/group/whatever.

The problem with this is that if you send a message to all users, or even a big subgroup of them on a very large site, for every message you could be adding 100,000+ records to the index. I am not a database guru and that may not be a problem, but such numbers scare me.

What I would suggest is that, if we do allow messages to groups/roles/other, we make them a sort of "announcement": not an actual conversation, but a single message that cannot be replied to. Though even here, if we want, we could allow conversations.

The weak link in such a proposal has already been pointed out by Berdir: messages gained when a new member joins and messages lost when one is removed.

Alternatively we can simply say that this is not our problem space and if people want to contact all the members of a site, they should use some other method.

litwol’s picture

I've tackled a similar problem before. The way to solve it is to "buffer" your writes. For example, instead of this:

INSERT INTO pm_index (col1, col2, col3, ...) VALUES (1, 23, 4, ...);
INSERT INTO pm_index (col1, col2, col3, ...) VALUES (1, 23, 4, ...);
INSERT INTO pm_index (col1, col2, col3, ...) VALUES (1, 23, 4, ...);
INSERT INTO pm_index (col1, col2, col3, ...) VALUES (1, 23, 4, ...);

It would become like this:

INSERT INTO pm_index (col1, col2, col3, ...) VALUES (1, 23, 4, ...), (1, 23, 4, ...), (1, 23, 4, ...), (1, 23, 4, ...);

A single query like this can write hundreds of thousands of records per second, while the individual writes are much slower.
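In practice the VALUES list would be chunked (say, a thousand rows per statement) so each query stays under limits such as MySQL's max_allowed_packet; a sketch with illustrative columns:

-- One INSERT per chunk of recipients; start a new statement once the
-- chunk size (e.g. 1000 rows) is reached.
INSERT INTO pm_index (mid, recipient, is_new) VALUES
  (42, 101, 1),
  (42, 102, 1),
  (42, 103, 1);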

Berdir’s picture

But it doesn't work on PostgreSQL...

Edit: Also, what nbz imho means is filling the index table with hundreds of thousands of rows and then selecting from it again. The inserting is not the issue; we can handle that with the batch API.

litwol’s picture

Properly indexed tables can handle millions of records very easily. I fear I chose the wrong schema when I broke up pm_*, which results in too many joins and degrades performance.

Is there no equivalent for pgsql that allows multiple inserts in a single query?
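For what it's worth, a multi-row VALUES list is standard SQL, and PostgreSQL supports it from version 8.2 onwards, so the limitation only affects older PostgreSQL releases; the same statement works there unchanged:

INSERT INTO pm_index (col1, col2, col3) VALUES (1, 23, 4), (1, 23, 4);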

oadaeh’s picture

Issue summary: View changes
Status: Active » Closed (won't fix)

This issue is being closed because it is against a branch for a version of Drupal that is no longer supported.
If you feel that this issue is still valid, feel free to re-open and update it (and any possible patch) to work with the 7.x-1.x branch (bug fixes only) or the 7.x-2.x branch.
Thank you.