


Design Review

  • How to instantly solve 99% of massive-data questions!!! http://blog.csdn.net/v_july_v/article/details/7382693
  • Ten massive-data questions: http://blog.csdn.net/v_july_v/article/details/6279498
    • In short: use a hash function to split the data into small files, process each file, then merge the results
    • Key techniques
      • bloom filter
        • k hash functions, mapping keys into a bit vector
        • fast and space-efficient, but has false positives and does not support deletion
      • hashing
      • bit-map
      • heap
      • external sort
      • trie
      • mapreduce
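The Bloom filter bullets above can be made concrete with a small sketch. The class name and the parameters m (bits) and k (hash functions) are illustrative, not a library API; salted SHA-1 digests stand in for k independent hash functions.

```python
# Minimal Bloom filter sketch: k hash functions map each key into an m-bit
# vector. Fast and space-efficient, but with false positives and no deletion.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m = m              # number of bits in the vector
        self.k = k              # number of hash functions
        self.bits = [0] * m

    def _positions(self, key):
        # Derive k bit positions from salted digests of the key.
        for i in range(self.k):
            digest = hashlib.sha1(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = 1

    def might_contain(self, key):
        # False means definitely absent; True may be a false positive.
        return all(self.bits[pos] for pos in self._positions(key))

bf = BloomFilter()
bf.add("hello")
print(bf.might_contain("hello"))  # True
```

Deletion is unsupported because clearing a bit could also erase evidence of other keys that hashed to the same position.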
  • Quora on preparing for system design questions http://www.quora.com/Job-Interviews/How-should-I-prepare-system-design-questions-for-Google-Facebook-Interview
  • A very insightful series of blog posts!! link
    • Focus on the anatomy of the big three: Google, Facebook, Twitter!!!
    • "The art of scaling up" also has a discussion of distributed systems
    • Caching popular authors
  • A very good summary!!! link
  • Facebook design summary!! Also a very, very good general summary link
    • Memcache (not memcached, the open-source in-memory cache)
    • Scalable Web Architecture and Distributed Systems
      • For a site like Flickr, request (read) time matters more than upload time, so optimization should focus on the read side rather than the upload side
  • Flickr architecture link
  • Pinterest architecture link
  • Advice on MITBBS beyond just grinding problems link
  • Walk through the complete implementation of a URL shortener
  • Anatomy of Twitter http://blog.sina.com.cn/s/blog_46d0a3930100fai8.html
  • Implementation of Google Calendar
  • Google system design question: http://www.mitbbs.com/article_t/JobHunting/32562595.html
    • How to implement Facebook-style activity broadcast.
    • After a friend f1 posts an update, all of their friends should see it.
    • Constraints:
    • 1. The number of friends is extremely large.
    • 2. Some friends are online and some are not; offline friends must see the update once they log in.
    • 3. If storage space is limited, how do you handle it?
  • How to build a chat room http://book.mixu.net/node/ch3.html
  • Terasort link
    • The bottleneck of ordinary merge sort is that the merge step is sequential
    • Terasort guarantees that, across the blocks handled by the mappers, all elements of block i sort before those of block i + 1, so the sorted blocks concatenate naturally
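The Terasort idea can be sketched in-process (a toy, not distributed; function and variable names are illustrative): sample the data to pick split points so every key in partition i sorts before every key in partition i + 1, sort each partition independently, then concatenate.

```python
# Terasort-style range partitioning, single-process toy version.
import bisect

def terasort(data, num_partitions=4):
    # Sample the data to pick (num_partitions - 1) split points.
    sample = sorted(data[::max(1, len(data) // 100)])
    step = max(1, len(sample) // num_partitions)
    splits = sample[step::step][:num_partitions - 1]

    # "Map" phase: route each key to its partition by binary search,
    # so partition i holds only keys smaller than those in partition i+1.
    partitions = [[] for _ in range(num_partitions)]
    for x in data:
        partitions[bisect.bisect_right(splits, x)].append(x)

    # "Reduce" phase: each partition sorts locally; because the ranges are
    # disjoint and ordered, simple concatenation yields a global sort.
    result = []
    for p in partitions:
        result.extend(sorted(p))
    return result

print(terasort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]
```

In the real MapReduce job the partitions live on different reducers; the key point survives here: no sequential final merge is needed.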

Keywords for Design

  • sharding (partition)
  • An excellent explanation of the difference between sharding and replication! link
  • load balancer
  • master-slave replication
  • consistent hashing
  • bloom filter
  • Kafka vs RabbitMQ
    • both are message queues (message brokers)
  • CDN
  • peak throughput

Hired in Tech - System Design link

  • Process
    • Step 1 - Constraints and Use Cases (5 min!)
      • Very first thing to do when faced with the problem: clarify the constraints and use case.
      • Don't panic about the correctness, write some numbers down!
      • Write on board: the use cases like
        • (Basic features)
        1. take a url => return a shorter one (shortening)
        2. take a short url => return the original (redirection)
        3. (ask some more augmented features?) 
        4. customized url?
        5. analytics?
        6. automatic link expiration?
        7. UI? API?
      • Write on board: constraints (traffic and data)
        1. scale down to seconds! Estimate the traffic and data
        2. 100M new urls per month
        3. What can be cached?? 10% from shortening and 90% from redirection
        4. how long is url?
      • Always compute totals over years or months first, then scale down to per-second rates!
      • Read heavy? Write heavy?
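The "scale down to seconds" step is simple arithmetic. The inputs (100M new URLs per month; shortening is 10% of traffic and redirection 90%) come from the notes above; the variable names are illustrative.

```python
# Back-of-envelope traffic estimate for the URL shortener example.
SECONDS_PER_MONTH = 30 * 24 * 3600          # ~2.6 million seconds

new_urls_per_month = 100_000_000
shorten_qps = new_urls_per_month / SECONDS_PER_MONTH   # ~39 requests/sec

# Shortening is 10% of all traffic, so total traffic is 10x that,
# and redirects (the other 90%) are 9x.
total_qps = shorten_qps * 10
redirect_qps = shorten_qps * 9

print(f"shorten ~{shorten_qps:.0f}/s, redirect ~{redirect_qps:.0f}/s")
```

Roughly 40 writes/sec and 350 reads/sec: clearly read-heavy, which is exactly the observation that justifies aggressive caching of redirects.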
    • Step 2 - Abstract Design
      • Draw a sketch; ignore the details! Get the basic features working first!
      • Application service layer and data storage layer
    • Step 3 - Understand Bottlenecks
      • Need a load balancer? Data is so huge that need to distribute the database?
      • Talk about the tradeoff!!!
      • This step combines steps 1 and 2: for example, the data-per-second estimate from step 1 tells you whether the database can keep up, or whether scanning the whole database on every request would be too slow. Bottlenecks identified!!!
    • Step 4 - Actual Design of a Scalable System!
      • (next section)
      • whenever the interviewer challenges your architectural choices, acknowledge that an idea is rarely perfect, and outline the advantages and disadvantages of your choice
      • adding a load balancer is not only to spread traffic evenly, but also to absorb spiky traffic!! Spin up extra servers for the spike and shut them down when traffic returns to normal. 
      • Don't forget to mention that we need to benchmark, profile, and load-test the architecture
  • Fundamentals
    • (Scalability @Harvard)
    • (Scalability for Dummies)
    • (Database sharding)
    • and don't forget about monitoring
    • sometimes there will be flash traffic, when many people want to watch/read the same thing
  • Examples
    • (highscalability blog)
  • Wrap-up
  • Practice

Scalability @Harvard

  • Vertical scaling
    • CPU cores, RAM, disks, etc.
    • why not a full solution? Real-world constraints...
    • Upgrading drives (e.g., spinning disk to SSD) upgrades database I/O
    • RAID
  • Horizontal scaling
    • Use more, cheaper machines: multiple servers
    • problem of CONSISTENCY, SYNCHRONIZATION
    • The load balancer answers client requests: it exposes a single public IP to the outside, while all servers use internal IPs; client requests are distributed among them by the load balancer! 
  • Load balancing
    • Round-robin requests across the servers
    • Implement load balancer:
      • Software: HAProxy etc.
      • Hardware: ~$100K for a decent one...
    • Use multiple load balancers to avoid single node failure at the load balancer
    • (from HiredInTech) adding a load balancer is not only to spread traffic evenly, but also to absorb spiky traffic!! Spin up extra servers for the spike and shut them down when traffic returns to normal. Also, when one server goes down, the others take over (reliability)
  • Caching
    • static .html pages - redundant, no templating; every page must repeat <header> ... so changes mean lots of find-and-replace 
    • MySQL query cache
    • memcached 
      • memory cache, which is also a server
      • if read-heavy, then this is very efficient!
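The read-heavy case can be sketched as a read-through cache, with a plain dict standing in for the memcached server and another for the database. All names here are illustrative, not the real memcached API.

```python
# Read-through cache sketch: on a miss, fall through to the database,
# then populate the cache so the next read is served from memory.
cache = {}
db = {"user:1": "alice", "user:2": "bob"}   # pretend database
stats = {"hits": 0, "misses": 0}

def get(key):
    if key in cache:
        stats["hits"] += 1
        return cache[key]
    stats["misses"] += 1
    value = db.get(key)        # the expensive DB query in a real system
    if value is not None:
        cache[key] = value     # populate for subsequent reads
    return value

get("user:1")   # miss: goes to the DB
get("user:1")   # hit: served from memory
print(stats)    # {'hits': 1, 'misses': 1}
```

The catch the lecture alludes to: every write must also invalidate or update the cached entry, or readers see stale data.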
  • DB replication
    • master-slave, benefit for read-heavy websites
    • master-master, 
    • the load balancer itself can become a single point of failure
    • use heartbeats to detect liveness
  • DB partitioning
    • dedicated servers per partition: mit.thefacebook.com, harvard.thefacebook.com
    • A-H to server1 I-Z to server 2, etc.
  • There is a worked example at the end, very good~
    • Sync between data centers?
      • load balancing at the DNS level!

Scalability for Dummies link

  • part 1, replicas
  • part 2, database can become the bottleneck. Switch to NoSQL.
  • part 3, caching - in-memory caching like Redis, not file-based caching!
    • cache queries (but invalidation is hard)
    • cache objects, makes asynchronous processing possible
  • part 4, asynchronism
    • async 1: cache the dynamic content, pre-rendered HTML
    • async 2: (computing-intensive tasks) pass to backend, constantly check if the job is done

Database Sharding

  • A way of partitioning data: instead of relying on replicas, each cheap server serves a subset of the users
    • high availability
    • faster queries
    • more write bandwidth (parallel writing!)
    • no replica (???)
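The two sharding schemes these notes mention (hash-based, and range-based like the A-H / I-Z example) boil down to a routing function. Shard and server names below are illustrative.

```python
# Sketch of routing a user to a shard under two common schemes.
SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def shard_by_hash(user_id):
    # Even spread, but adding a shard remaps most keys
    # (consistent hashing is the usual fix; see below in these notes).
    # Built-in hash() is illustrative only: it is randomized per process.
    return SHARDS[hash(user_id) % len(SHARDS)]

def shard_by_range(username):
    # Range scheme from the lecture: first letter decides the server.
    # Easy to range-scan, but risks hot spots (more "S" names than "X").
    return "server1" if username[0].upper() <= "H" else "server2"

print(shard_by_range("harvard"))  # server1
print(shard_by_range("mit"))      # server2
```

Parallel writes across shards are what give the "more write bandwidth" benefit listed above.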

System design interview questions: approaches summarized link

  • Most of these can be solved with Google's "big three": MapReduce, GFS, BigTable
  • Before the big three, the common architecture was: database + sharding + cache
  • Virtual node technique
    • Virtual node technique: commonly used in distributed data partitioning. The scenario: a huge pile of data (maybe TB- or PB-scale) must be partitioned by some field (key) across dozens (or more) machines, with good load balance and easy scaling. The traditional approach is Hash(key) mod N; its biggest drawback is poor scalability, because adding or removing a machine redistributes all of the data, which is far too expensive. Newer techniques solve this. One is the DHT mentioned above, already adopted by many large systems; another improves on "Hash(key) mod N": suppose we want to spread data over 20 machines. Instead of hash(key) mod 20, choose N much larger than 20, say 20000000, and keep an extra table recording the range of slot values each node stores, e.g. node1: 0~1000000; node2: 1000001~2000000; ... Then, when adding a new node, we only move part of each node's data to it and update the table
  • Q: design a system with faster writes -> BigTable: concurrent writes go into a cache (random writes), and when the cache fills up it is flushed to disk (sequential writes, fast!)
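The lookup-table improvement over Hash(key) mod N described above can be sketched as follows; the slot count, ranges, and node names are illustrative, matching the paragraph's example shape.

```python
# Virtual-slot partitioning sketch: N is much larger than the machine count,
# and a table maps ranges of slots to nodes. Adding a machine only moves the
# slots you explicitly reassign in the table, not the whole dataset.
import bisect
import hashlib

N = 20_000_000                       # number of virtual slots
# table: (range_end_inclusive, node), sorted by range end.
table = [(4_999_999, "node1"), (9_999_999, "node2"),
         (14_999_999, "node3"), (19_999_999, "node4")]

def slot(key):
    # A stable hash (md5, unlike Python's randomized hash()) mapped to a slot.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % N

def node_for(key):
    ends = [end for end, _ in table]
    return table[bisect.bisect_left(ends, slot(key))][1]

# To add node5, reassign part of node4's range in the table; only keys whose
# slots fall in the reassigned sub-range move.
print(node_for("some-user-id"))
```

Compare with plain hash(key) mod 4: there, growing to 5 machines would remap roughly 80% of all keys.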

High Scalability

  • YouTube architecture
    • Most popular content is moved to CDN
  • 8 important scalable design patterns link
  • What is your scalability plan link
    • Well written! It builds up step by step
  • The coffee-shop analogy, nice! link

Company Core Technology Review

  • Must-read papers
    • Google: Google File System, MapReduce, Bigtable
      • Bigtable
        • essentially a distributed key-value store
        • the key is <row, column, timestamp>
        • the tablet is the smallest unit
    • Facebook: Cassandra
    • Amazon: Dynamo
  • 注意distributed data store
    • Non-relational, therefore quick

Google

  • Web crawling and indexes link
    • the crawler should be distributed
    • robots.txt
    • DNS resolution is a well-known bottleneck in web crawling
    • during BFS, use a priority queue keyed by page weight
  • DESIGN
    • Big collection!!!
    • Distributed crawler link
    • autocomplete
      • trie
      • HMM?
      • popularity
    • Read-write lock
      • read-write lock, also called shared lock and exclusive lock
      • difference between lock() and trylock(): one blocks, the other does not
      • this one is well written link
      • two interview questions link link
      • beware of writer starvation
      • producer-consumer link
      • implementing a semaphore with a mutex and condition variable link
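A shared/exclusive lock can be built from one mutex plus one condition variable, the same building blocks the semaphore bullet mentions. A sketch only: this naive version allows exactly the writer starvation the notes warn about, since a steady stream of readers keeps `readers > 0` forever.

```python
# Read-write (shared/exclusive) lock from a mutex + condition variable.
import threading

class RWLock:
    def __init__(self):
        self.cond = threading.Condition()   # one mutex + one CV
        self.readers = 0
        self.writing = False

    def acquire_read(self):
        with self.cond:
            while self.writing:             # readers wait only for writers
                self.cond.wait()
            self.readers += 1

    def release_read(self):
        with self.cond:
            self.readers -= 1
            if self.readers == 0:
                self.cond.notify_all()      # a writer may now proceed

    def acquire_write(self):
        with self.cond:
            while self.writing or self.readers > 0:   # exclusive access
                self.cond.wait()
            self.writing = True

    def release_write(self):
        with self.cond:
            self.writing = False
            self.cond.notify_all()

lock = RWLock()
lock.acquire_read(); lock.acquire_read()    # many readers share the lock
lock.release_read(); lock.release_read()
lock.acquire_write()                        # writer is exclusive
lock.release_write()
```

The standard starvation fix is to count waiting writers and make new readers block while any writer is queued.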
    • distributed kv storage (DHT B+ tree)
    • distributed lock
      • in essence still based on Paxos
    • distributed hash 
      • basic introduction to distributed hashing link link
      • consistent hashing link
        • the drawback of plain modular hashing is that adding or removing machines forces massive data migration!
      • GFS takes a metadata-based approach! Read it!! -> single point of failure -> consistent hashing
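A consistent hash ring can be sketched as follows (class and method names are illustrative): servers are hashed onto a ring, each key belongs to the first server clockwise from the key's position, and adding or removing one server only remaps the keys in that server's arc.

```python
# Consistent hashing ring sketch with virtual nodes.
import bisect
import hashlib

def _h(s):
    # Stable 32-bit position on the ring (md5, not Python's randomized hash).
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % (2 ** 32)

class ConsistentHash:
    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes     # virtual nodes smooth the key distribution
        self.ring = []           # sorted list of (position, node)
        for n in nodes:
            self.add(n)

    def add(self, node):
        for i in range(self.vnodes):
            bisect.insort(self.ring, (_h(f"{node}#{i}"), node))

    def remove(self, node):
        self.ring = [(p, n) for p, n in self.ring if n != node]

    def get(self, key):
        # First ring entry clockwise from the key's position (with wraparound).
        pos = _h(key)
        i = bisect.bisect(self.ring, (pos, ""))
        return self.ring[i % len(self.ring)][1]

ring = ConsistentHash(["s1", "s2", "s3"])
before = {k: ring.get(k) for k in ("a", "b", "c", "d")}
ring.add("s4")
after = {k: ring.get(k) for k in before}
# Only keys falling in s4's new arcs change owners; the rest stay put.
```

This is exactly the fix for the "massive migration" drawback of plain modular hashing noted above.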
  • this High Scalability post link
  • how to implement online booking link
  • Shopping cart link
    • how cookies and sessions are implemented, and how they tie into the database; well explained!

Facebook

  • Zuck's law
  • This article!!!! http://www.mitbbs.com/article_t/JobHunting/32741713.html
  • Hiphop compiler
Facebook Stats
Facebook Infrastructure
Facebook feeds 
  • http://readme.skplanet.com/wp-content/uploads/2012/11/0-3_Facebook1.pdf a very satisfying read
  • A few articles on the system
    • general discussion of feeds!!!! link
    • the one on Stack Overflow link
    • the links inside are all excellent!!!
      • push model for people who don't have many followers
      • pull model for the other way around
Tech Talks
  • "Facebook is not about actions, but interactions"
