Chatty - Case Study, Edan Maor & Ofir Ovadia

About

Chatty is a Group Chat for every site

chat with other current visitors to the site

was used on the #J14 home page, http://j14.org.il/

also used on many other sites

Agenda

review the production problems they had

open discussion on server maintenance & scaling

Stack

Client

JS/HTML

Server

Django

Redis, used to speed up operations and lower database load on the poll pings (sketch below)

MySQL
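
A minimal sketch of one way Redis can absorb the poll pings, assuming redis-py and a hypothetical key name chatty:latest:<room>; the talk doesn't describe the actual keys used, only that Redis keeps the pings off MySQL:

    import redis

    r = redis.Redis(host="localhost", port=6379)

    def publish_message(room_id, message_id):
        # called right after the message row is written to MySQL;
        # the key name "chatty:latest:<room>" is an assumption
        r.set("chatty:latest:%s" % room_id, message_id)

    def has_new_messages(room_id, last_seen_id):
        # each poll ping reads one Redis key instead of querying MySQL
        latest = r.get("chatty:latest:%s" % room_id)
        return latest is not None and int(latest) > int(last_seen_id)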

Client-Server interaction

download script

initial sync

get current members

ping

every 2 sec

get new messages

send message
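
A sketch of how these four operations might map onto Django; the route names, the Message model with a text field, and the modern django.urls.path routing are all assumptions, since the outline only lists the operations:

    # urls.py -- hypothetical routes for the operations above
    from django.urls import path
    from . import views

    urlpatterns = [
        path("chatty.js", views.script),     # download script (the embeddable widget)
        path("sync/", views.initial_sync),   # initial sync: get current members
        path("ping/", views.ping),           # polled every 2 seconds: get new messages
        path("send/", views.send_message),   # send message
    ]

    # views.py -- the ping view, reusing the has_new_messages() Redis check sketched earlier
    from django.http import JsonResponse
    from .models import Message              # hypothetical model

    def ping(request):
        room_id = request.GET.get("room")
        last_seen = int(request.GET.get("last_seen", 0))
        if not has_new_messages(room_id, last_seen):
            # nothing new: Redis answered the ping, MySQL untouched
            return JsonResponse({"messages": []})
        msgs = Message.objects.filter(room_id=room_id, id__gt=last_seen)
        return JsonResponse({"messages": [m.text for m in msgs]})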

Architecture

Old

1 box on Slicehost: Apache + mod_wsgi instances, Redis, MySQL

then moved to a larger machine each time it couldn't handle the load, several times over

until no single machine could handle the load

Then panicked & understood they needed to change the architecture

New

topology
  Rackspace load balancer service
  many web server boxes
    NGINX & gunicorn processes
    gunicorn is much more predictable in RAM usage, unlike Apache
    one gunicorn process per core (config sketch below)
    NGINX feeds responses to clients at their own speed
    very easy to work with gunicorn & Django, much easier than Apache
  1 box for MySQL & Redis

very easy to add nodes

handles load well
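
The gunicorn-per-core point translates directly into a gunicorn config file, which is plain Python. A minimal sketch; the exact values are assumptions, the outline only describes the shape:

    # gunicorn.conf.py
    import multiprocessing

    bind = "127.0.0.1:8000"                # NGINX proxies here and buffers responses for slow clients
    workers = multiprocessing.cpu_count()  # one worker process per core
    worker_class = "sync"                  # simple sync workers: predictable RAM usage per process
    max_requests = 1000                    # recycle workers periodically to keep memory bounded (assumption)

Run with something like "gunicorn -c gunicorn.conf.py myproject.wsgi" (project name hypothetical), with NGINX in front as the reverse proxy and the Rackspace load balancer fanning out across the boxes.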

Open discussion

Better server architecture

what simple things can improve this architecture?

Suggestions
  there's a problem because of the polling architecture; maybe Django isn't the right choice, and something like Comet is needed
  gunicorn has async handlers
  can optimize with ETags to stop request handling at the NGINX level, since it will know there are no updates (sketch after this list)
  use a physical rather than a virtual machine for the DB server, for faster IO; run top to check the load, or use sar
  use pure Django over App Engine, with the Channel API for push
  use Socket.io for push
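
The ETag suggestion, sketched on the Django side only (stopping the request at the NGINX level would additionally need proxy-cache configuration, not shown); the key name, the Redis client r from the earlier sketch, and fetch_messages() are hypothetical:

    # views.py -- answer 304 when the room's latest message id in Redis hasn't changed
    from django.http import HttpResponseNotModified, JsonResponse

    def ping(request):
        room_id = request.GET.get("room")
        latest = r.get("chatty:latest:%s" % room_id) or b"0"   # r: Redis client from the earlier sketch
        etag = '"%s"' % latest.decode()
        if request.META.get("HTTP_IF_NONE_MATCH") == etag:
            # nothing new since the client's last poll; a cache in front could serve this revalidation
            return HttpResponseNotModified()
        response = JsonResponse({"messages": fetch_messages(room_id)})  # fetch_messages is hypothetical
        response["ETag"] = etag
        return response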

Meta remark: first understand your bottleneck and know what should be attacked

Profiling

Suggested tools: Nagios, Zabbix

Benchmarking
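
As a starting point for the benchmarking discussion, a rough sketch that measures ping latency against a local instance; the URL, request count, and the use of the requests library are all assumptions:

    # bench.py -- crude latency benchmark of the ping endpoint, before/after changes
    import statistics
    import time

    import requests

    URL = "http://localhost:8000/ping/?room=1&last_seen=0"   # hypothetical endpoint

    samples = []
    for _ in range(200):
        start = time.perf_counter()
        requests.get(URL, timeout=5)
        samples.append(time.perf_counter() - start)

    print("median %.1f ms" % (statistics.median(samples) * 1000))
    print("p95    %.1f ms" % (sorted(samples)[int(len(samples) * 0.95)] * 1000))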