Splitting Lovemachine Tenants

The Lovemachine architecture has the following provisions to survive rapid growth:

Architecture
The primary wildcard ELB (*.sendlove.us) is currently: wildcard-sendlove-us-839460415.us-east-1.elb.amazonaws.com

 * ELB Load Balancers - There is a one-to-many relationship between an Amazon ELB load balancer and the front-end instances behind it; each front-end instance can belong to at most one balancer group. One balancer group is designated by a canonical name.


 * Front-end Apache - Each front-end belongs to one 'cluster', which is ultimately defined by its configuration database endpoint (cupid). Front-ends are intended to be dumb clones, differing only in their cupid location and log output. Any machine can answer requests for any tenant contained within its cupid database, as long as it can also reach that tenant's database (see the lookup sketch after this list).


 * Database server - Each database cluster (master and slaves) contains the information for one or more tenants. The database servers function in a master-slave-slave chain. To replace the master, we bring a slave fully in sync, promote it to master, and demote the old master. If we have to replace a slave, we restore the new slave from backup and replay binlogs, or bring the new slave to a point from which it can catch up with the master over replication (see the replication sketch below).
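
As an illustration of the cupid-driven front-end model, below is a minimal lookup sketch. The tenant_config table and its columns are assumptions for illustration only; the real cupid schema may differ.

    # Minimal sketch of a front-end resolving a tenant from cupid.
    # The tenant_config table and its columns are illustrative assumptions.
    import mysql.connector

    def resolve_tenant(cupid_host, tenant_id):
        """Return the tenant's database endpoint and API key, or None."""
        conn = mysql.connector.connect(host=cupid_host, user="cupid_ro",
                                       password="...", database="cupid")
        try:
            cur = conn.cursor()
            cur.execute("SELECT db_server, api_key FROM tenant_config "
                        "WHERE tenant_id = %s", (tenant_id,))
            row = cur.fetchone()
            return {"db_server": row[0], "api_key": row[1]} if row else None
        finally:
            conn.close()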

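The slave-replacement flow, sketched with stock MySQL replication commands driven from Python. Hostnames, binlog coordinates, and the replication user are placeholders; treat this as an outline rather than a runbook.

    # Sketch of replacing a slave in the master-slave-slave chain. The mysql
    # CLI reads credentials from ~/.my.cnf here; hosts and binlog coordinates
    # are placeholders that must come from the restored backup.
    import subprocess

    def mysql_exec(host, sql):
        subprocess.run(["mysql", "-h", host, "-e", sql], check=True)

    NEW_SLAVE = "db-slave-new.internal"  # placeholder hostname

    # 1. Restore the new slave from the latest backup (outside this sketch),
    #    noting the binlog file/position recorded with that backup.
    # 2. Point the new slave at the master and start replication.
    mysql_exec(NEW_SLAVE, (
        "CHANGE MASTER TO"
        " MASTER_HOST='db-master.internal',"
        " MASTER_USER='repl', MASTER_PASSWORD='...',"
        " MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4;"
        " START SLAVE;"))

    # 3. Poll until Seconds_Behind_Master reaches 0 before trusting the slave.
    mysql_exec(NEW_SLAVE, "SHOW SLAVE STATUS\\G")
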
Volume Gating Controls
The exposure here is repeated requests to bad email addresses; no further action occurs until the confirmation link is successfully returned. It is possible to add rate limiting here; however, given the nature of a DoS, rate-limit state for this interface belongs in a memory-based datastore and should avoid database interaction (see the RateLimit sketch later in this section).
 * Trial - Trial begins the registration process by sending a confirmation token to the requested address. Clicking the confirmation link creates a new tenant request in the trial database.
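
A minimal sketch of what an expiring, signed confirmation token could look like; the HMAC scheme, secret, and 48-hour TTL here are assumptions, not the actual Trial implementation.

    # Sketch of an expiring, HMAC-signed confirmation token for Trial signup.
    # SECRET and the 48-hour TTL are illustrative assumptions.
    import hashlib
    import hmac
    import time

    SECRET = b"replace-with-real-secret"
    TTL = 48 * 3600

    def make_token(email):
        expires = int(time.time()) + TTL
        payload = f"{email}|{expires}"
        sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}|{sig}"

    def verify_token(token):
        """Return the email if the token is valid and unexpired, else None."""
        try:
            email, expires, sig = token.rsplit("|", 2)
        except ValueError:
            return None
        expected = hmac.new(SECRET, f"{email}|{expires}".encode(),
                            hashlib.sha256).hexdigest()
        if hmac.compare_digest(sig, expected) and time.time() < int(expires):
            return email
        return None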

The simplest method of gating tenant creation is to limit when Tewari runs or to filter the selection of new accounts that Tewari processes.
 * Tewari - Tewari is the script responsible for provisioning new tenants and performing initial tests on the instance. Tewari currently runs from cron: it picks up a list of pending new instances, marks each new instance as in-progress (so that multiple running creation scripts do not have contention issues), creates the database user, creates tables and views, generates API keys, and stores the configuration information in cupid. The system then connects to the new tenant over the API to send love to the administrator, which tests the API, database, configuration, and email capabilities of the new tenant.
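
A sketch of how Tewari's in-progress marking and volume gating might be combined: claim a bounded batch of pending instances atomically so concurrent runs do not collide. The table and column names are assumptions.

    # Sketch of Tewari claiming a bounded batch of pending tenants so that
    # concurrent runs do not collide. Table and column names are assumptions;
    # BATCH_LIMIT doubles as a volume-gating knob.
    import os
    import socket
    import mysql.connector

    BATCH_LIMIT = 5  # gate: at most this many new tenants per run

    def claim_pending(conn):
        run_id = f"{socket.gethostname()}:{os.getpid()}"
        cur = conn.cursor()
        # The UPDATE is atomic: racing runs match zero of the claimed rows.
        cur.execute("UPDATE pending_tenants SET status = 'in_progress', "
                    "claimed_by = %s WHERE status = 'pending' "
                    "ORDER BY requested_at LIMIT %s", (run_id, BATCH_LIMIT))
        conn.commit()
        cur.execute("SELECT tenant_id FROM pending_tenants "
                    "WHERE status = 'in_progress' AND claimed_by = %s",
                    (run_id,))
        return [row[0] for row in cur.fetchall()]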


 * Import - We have a cap of 250 users per CSV file as a gating mechanism against spam. We can tune this setting or add a maximum-users cap for default/new accounts.
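
A sketch of how the cap could be enforced with a per-account override; MAX_USERS_DEFAULT and the account_cap parameter are illustrative.

    # Sketch of enforcing the per-CSV import cap with a per-account override.
    # MAX_USERS_DEFAULT and the account_cap parameter are illustrative.
    import csv

    MAX_USERS_DEFAULT = 250

    def check_import(path, account_cap=None):
        cap = account_cap or MAX_USERS_DEFAULT
        with open(path, newline="") as f:
            rows = max(0, sum(1 for _ in csv.reader(f)) - 1)  # minus header
        if rows > cap:
            raise ValueError(f"CSV has {rows} users; cap is {cap}")
        return rows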


 * Mail usage - Many, if not most, of the records (user tables, love tables) contain a status column that we can use to manage workflows (such as taking email creation out of band). If we need to take email out of band, this column will be used to communicate with message-queue processors.
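
A sketch of an out-of-band mail processor driven by the status column; the table name, columns, and status values are assumptions.

    # Sketch of taking email out of band via the status column. The table
    # name, columns, and status values are illustrative assumptions.
    import mysql.connector

    def process_pending_mail(conn, send_mail):
        cur = conn.cursor()
        cur.execute("SELECT id, address, body FROM love "
                    "WHERE status = 'mail_pending' LIMIT 100")
        for row_id, address, body in cur.fetchall():
            send_mail(address, body)          # hand off to the mail path
            cur.execute("UPDATE love SET status = 'mail_sent' "
                        "WHERE id = %s", (row_id,))
        conn.commit()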


 * RateLimit - We have a rate-limit function that uses a time-decay method. Multiple queues (per user, per action, etc.) can be created on demand. Adding a rate limit is a function of time per activity: if we want to allow 10 love per minute for each user, we add 6 seconds of decay for each message sent. If at any time the accumulated decay exceeds the allowed threshold (multiple time queues can be created, as in 6/minute and 30/hour), the rate limit returns an integer, which is the number of seconds before the next action can be performed.
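
A minimal in-memory sketch of the time-decay method as described: each action adds decay seconds to a per-queue level that drains in real time, and a blocked action returns the whole seconds to wait. The class and its interface are illustrative, not the production function.

    # Sketch of the time-decay rate limiter: each action adds `decay` seconds
    # to a per-queue level that drains one second per second. A blocked action
    # returns the whole number of seconds to wait; 0 means it may proceed.
    import math
    import time

    class RateLimiter:
        def __init__(self):
            self.queues = {}  # (subject, action) -> (level_seconds, last_checked)

        def check(self, subject, action, decay, window):
            now = time.time()
            level, last = self.queues.get((subject, action), (0.0, now))
            level = max(0.0, level - (now - last))  # drain elapsed time
            if level + decay > window:
                return math.ceil(level + decay - window)  # seconds to wait
            self.queues[(subject, action)] = (level + decay, now)
            return 0

    # Usage: 10 love/minute (6s decay, 60s window) stacked with 30/hour.
    # A production limiter would check both queues before charging either.
    limiter = RateLimiter()
    wait = max(limiter.check("user:42", "love/min", 6, 60),
               limiter.check("user:42", "love/hr", 120, 3600))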

Moving a Tenant
Moving a tenant to a sub-cluster for load balancing/allocation requires the following activities:

1. A new ELB endpoint is needed only if you are changing load on the front-ends; to rebalance database servers only, just change the SERVER in the tenant's cupid config table.
2. Halt updates to the tenant and make a dump of the tenant's database (PREFIX_TENANTID).
3. Make a dump of the cupid configuration data, change the API keys to prevent cross-talk during migration, create new account information, and update the DB_SERVER to reflect the new database location.
4. Create the tenant's database credentials on the new database server.
5. Import the archived data from the prior tenant database.
6. Import the cupid data for the tenant as it will run on this cluster.
7. Use /etc/hosts or a proxy override to test the tenant on the new cluster.
8. Disable the configuration for the tenant in the prior cupid configuration.
9. Change DNS for 'tenant'.sendlove.us to point to the new ELB balancer.
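
A condensed sketch of steps 2, 4, and 5 driven from Python; hostnames, the tenant database name, and credentials are placeholders, and a real runbook would verify each step before proceeding.

    # Condensed sketch of steps 2, 4, and 5: dump the tenant DB, create
    # credentials on the new server, and import. Hosts, names, and passwords
    # are placeholders; mysql/mysqldump read credentials from ~/.my.cnf here.
    import subprocess

    TENANT_DB = "PREFIX_TENANTID"   # placeholder tenant database name
    OLD_DB, NEW_DB = "db-old.internal", "db-new.internal"

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    # Step 2: halt updates first (application-side), then dump the database.
    sh(f"mysqldump -h {OLD_DB} --single-transaction {TENANT_DB} "
       f"> /tmp/{TENANT_DB}.sql")

    # Step 4: create the tenant's credentials on the new database server.
    sh(f"mysql -h {NEW_DB} -e \"CREATE DATABASE {TENANT_DB}; "
       f"CREATE USER 'tenant'@'%' IDENTIFIED BY '...'; "
       f"GRANT ALL ON {TENANT_DB}.* TO 'tenant'@'%';\"")

    # Step 5: import the archived data on the new server.
    sh(f"mysql -h {NEW_DB} {TENANT_DB} < /tmp/{TENANT_DB}.sql")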