Tuesday, 15 July 2014

caching - 2 instances of Redis: as a cache and as a persistent datastore -



I want to set up two instances of Redis because I have different requirements for the data I want to store. While I don't mind losing data that is primarily used as cached data, I want to avoid losing the data stored when I use python-rq, which stores the jobs to execute in Redis.

Below are the main settings I came up with to accomplish this goal.

What do you think?

Did I forget anything important?

1) Redis as a cache

# Snapshotting, so we don't rebuild the whole cache if Redis has to restart.
# These values are reasonable and should not hurt performance.
save 900 1
save 300 10
save 60 10000

# Define the max memory and remove the least used keys when it is reached.
maxmemory x   # define according to your needs
maxmemory-policy allkeys-lru
maxmemory-samples 5

# RDB file name
dbfilename dump.rdb

# Working directory
dir ./

# Make sure appendonly is disabled
appendonly no
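To see what allkeys-lru eviction means in practice, here is a toy sketch (plain Python, not Redis code) of a cache that evicts the least recently used key when a size limit is reached. Note that real Redis does not track an exact usage order; it samples a few keys per eviction (that is what maxmemory-samples 5 controls) and evicts the best candidate among them.

```python
from collections import OrderedDict

class LRUCache:
    """Toy model of Redis's allkeys-lru policy: once the key limit is
    reached, the least recently used key is evicted to make room."""

    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.max_keys:
            self.data.popitem(last=False)  # evict the least recently used key

cache = LRUCache(max_keys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # "a" is now more recently used than "b"
cache.set("c", 3)  # evicts "b"
```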

2) Redis as a persistent datastore

# Disable snapshotting since we save on each request, see appendonly below.
save ""

# No limit in memory - how do I disable it? By not defining it in the config file?
maxmemory

# Enable appendonly
appendonly yes
appendfilename redis-aof.aof
appendfsync always   # save on each request so we don't lose any data
no-appendfsync-on-rewrite no

# Rewrite the AOF file; should the min size be based on the approximate size of the db?
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 32mb
aof-rewrite-incremental-fsync yes
aof-load-truncated yes
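The interaction of auto-aof-rewrite-percentage and auto-aof-rewrite-min-size can be sketched as follows. This is a simplified model of the documented behaviour, not Redis source code: a rewrite fires when the AOF has grown by the configured percentage over its size after the last rewrite, and only once the file exceeds the minimum size.

```python
def should_rewrite_aof(current_size, base_size,
                       rewrite_percentage=100, min_size=32 * 1024 * 1024):
    """Simplified model of Redis's automatic AOF rewrite trigger.

    base_size is the AOF size recorded after the last rewrite (or at
    startup); a rewrite fires when the file has grown by
    rewrite_percentage percent over that base and exceeds min_size."""
    if current_size < min_size:
        return False
    growth = (current_size - base_size) * 100 / base_size
    return growth >= rewrite_percentage

mb = 1024 * 1024
should_rewrite_aof(20 * mb, 16 * mb)  # False: below the 32 MB floor
should_rewrite_aof(40 * mb, 16 * mb)  # True: grown 150% and above the floor
should_rewrite_aof(40 * mb, 30 * mb)  # False: only ~33% growth
```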

sources:

http://redis.io/topics/persistence
https://raw.githubusercontent.com/antirez/redis/2.8/redis.conf
http://fr.slideshare.net/eugef/redis-persistence-in-practice-1
http://oldblog.antirez.com/post/redis-persistence-demystified.html
How to perform persistence store in Redis?
https://www.packtpub.com/books/content/implementing-persistence-redis-intermediate

I think your persistence options are too aggressive, but it mostly depends on the nature and volume of your data.

For the cache, using RDB is a good idea, but keep in mind that depending on the volume of data, dumping the content of memory to disk has a cost. On my system, Redis can write memory data at about 400 MB/s, but note that data may (or may not) be compressed and may (or may not) use dense data structures, so your mileage will vary. With your settings, a cache supporting heavy writing will generate a dump every minute. You have to check that with the volume you have, the dump duration stays well below that minute (something like 6-10 seconds would be fine). Actually, I would recommend keeping only save 900 1 and removing the other save lines. And even a dump every 15 minutes could be considered too frequent, especially if you have SSD hardware, which will progressively wear out.
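The dump-duration check above is simple arithmetic: dataset size divided by disk write speed gives a rough lower bound. A quick sketch, assuming the ~400 MB/s figure quoted above (the example dataset sizes are made up):

```python
def dump_duration_seconds(dataset_bytes, write_speed_bytes_per_s=400 * 1024 * 1024):
    """Rough lower bound on RDB dump time: dataset size / disk write speed."""
    return dataset_bytes / write_speed_bytes_per_s

gb = 1024 ** 3
round(dump_duration_seconds(3 * gb), 1)   # ~7.7 s: comfortably under a minute
round(dump_duration_seconds(30 * gb), 1)  # ~76.8 s: dumps would overlap with save 60 10000
```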

For the persistent store, you need to define the dir parameter as well (since it also controls the location of the AOF file). The appendfsync always option is overkill and too slow for your purpose, except if you have very low throughput; you should set it to everysec. If you cannot afford to lose a single bit of data in case of a system crash, then using Redis as a storage backend is not a good idea anyway. Finally, you will have to adjust auto-aof-rewrite-percentage and auto-aof-rewrite-min-size to the level of write throughput the Redis instance has to sustain.
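To see why appendfsync always is so much slower, consider how many fsync() calls each policy issues. This is a toy estimate that ignores Redis's background-fsync details: always pays one fsync per write command, while everysec batches all the writes of a given second into roughly one.

```python
def fsync_count(num_writes, writes_per_second, policy):
    """Toy estimate of fsync() calls for a write burst under each appendfsync policy."""
    if policy == "always":
        return num_writes  # one fsync per write command
    if policy == "everysec":
        seconds = max(1, num_writes // writes_per_second)
        return seconds     # roughly one fsync per second
    if policy == "no":
        return 0           # flushing is left to the OS
    raise ValueError(policy)

fsync_count(100_000, 10_000, "always")    # 100000 fsyncs
fsync_count(100_000, 10_000, "everysec")  # ~10 fsyncs for the same burst
```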

Labels: caching, redis
