Wang Zheng's Blog
Ruby learning notes

http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/?_ga=1.25950975.395536525.1415167309

 

(1) Import the public key used by the package management system

The Ubuntu package management tools ensure package consistency and authenticity by requiring that distributors sign packages with GPG keys. Issue the following command to import the MongoDB public GPG key:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

(2) Create a list file for MongoDB

Create the /etc/apt/sources.list.d/mongodb-org-3.0.list list file using the following command:

echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list

(3) Reload the local package database

Issue the following command to reload the local package database:

sudo apt-get update

(4) Install the MongoDB packages

You can install either the latest stable version of MongoDB or a specific version.

To install the latest stable version:

sudo apt-get install -y mongodb-org

Or, to install a specific release:

 

sudo apt-get install -y mongodb-org=3.0.5 mongodb-org-server=3.0.5 mongodb-org-shell=3.0.5 mongodb-org-mongos=3.0.5 mongodb-org-tools=3.0.5

 

There are two ways to integrate changes from one branch into another: merge and rebase.

Suppose development has diverged into two branches, each with its own new commits. The easiest way to integrate them is merge: it performs a three-way merge between the latest snapshots of the two branches and their common ancestor, and produces a new commit object with the result.

There is another option: you can take the changes introduced on one branch and replay them on top of another. This operation is called rebasing; rebase re-applies the commits made on one branch on top of another branch.

$ git checkout experiment
$ git rebase master

It works by going back to the most recent common ancestor of the two branches, generating a series of patch files from the subsequent commits on the current branch (the branch being rebased, experiment), then starting from the last commit of the base branch (master) and applying those patches one by one. Each applied patch produces a new commit, rewriting experiment's history so that it becomes a direct descendant of master.

 

The final snapshot produced by rebasing is exactly the same as the one an ordinary three-way merge would produce. Although the end result is identical, rebasing gives you a cleaner commit history: the history of a rebased branch looks as if all the changes happened one after another on a single line, even though they originally happened in parallel.

The usual reason to rebase is to produce patches that apply cleanly on a remote branch. For example, for a project you do not maintain but want to help with, it is best to rebase: develop in your own branch, and when you are ready to submit patches to the main project, rebase onto the latest origin/master first and then submit. The maintainer then has no extra work to do (in effect, the responsibility for resolving conflicts between your patches and the latest mainline shifts to the contributor); the maintainer only needs to do a fast-forward merge from the repository URL you provide, or apply your patches directly.

======== The risks of rebasing ========

Once a branch's commits have been published to a public repository, never rebase that branch.

When you rebase, you discard existing commit objects and create new ones that are similar but different. If you publish the original commits, other people pull them and base their work on them, and later you use git rebase to discard those commits and publish new, replayed ones in their place, your collaborators will be forced to re-merge their work, and the history will become a mess the next time you pull from them.

https://git-scm.com/book/zh/v1/Git-%E5%88%86%E6%94%AF-%E5%88%86%E6%94%AF%E7%9A%84%E8%A1%8D%E5%90%88

 

/workspace/xxx-xx-cms:the-channel-sort$ git push origin the-channel-sort
To git@bitbucket.org:xxx-xxx-cms/xxx-xx-cms.git
! [rejected]        the-channel-sort -> the-channel-sort (non-fast-forward)
error: failed to push some refs to 'git@bitbucket.org:xxx-xxx-cms/xxx-xx-cms.git'
To prevent you from losing history, non-fast-forward updates were rejected
Merge the remote changes (e.g. 'git pull') before pushing again.  See the
'Note about fast-forwards' section of 'git push --help' for details.
/workspace/xxx-xx-cms:the-channel-sort$ git fetch
remote: Counting objects: 13, done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 13 (delta 9), reused 0 (delta 0)
Unpacking objects: 100% (13/13), done.
From bitbucket.org:xxx-xxx-xxx/xxx-xx-cms

* [new branch]      production-deployment -> origin/production-deployment
* [new branch]      staging    -> origin/staging
/workspace/xxx-xx-cms:the-channel-sort$ git rebase origin/the-channel-sort
First, rewinding head to replay your work on top of it…
Applying: 重构代码
/workspace/xxx-xx-cms:the-channel-sort$ gitg
/workspace/xxx-xx-cms:the-channel-sort$ git push
Counting objects: 35, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (18/18), done.
Writing objects: 100% (18/18), 2.29 KiB, done.
Total 18 (delta 14), reused 0 (delta 0)
remote:
remote: View pull request for the-channel-sort => master:
remote:   https://bitbucket.org/xxx-xxx-cms/xxx-xx-cms/pull-requests/21?t=1
remote:
To git@bitbucket.org:xxx-xxx-cms/xxx-xx-cms.git
841dda2..95cdf0d  the-channel-sort -> the-channel-sort

Capistrano 2: first deployment

  1. Edit config/deploy.rb and config/deploy/production.rb
  2. bundle exec cap production deploy:setup
  3. bundle exec cap production deploy:check
  4. bundle exec cap production deploy:cold

Configure Nginx

  1. ln -s /opt/app/ruby/aaa-cms/current/config/nginx.conf /etc/nginx/conf.d/ott_tv_cms.conf
  2. Add "include conf.d/aaa_cms.conf;" to /etc/nginx/nginx.conf
  3. Run nginx -t to check that the nginx configuration is valid
  4. Run nginx -s reload to reload nginx

Seed the database

RAILS_ENV=production bundle exec bin/rake db:seed

Capistrano 2: subsequent deployments after the first

bundle exec cap production deploy

cat ~/.ssh/id_rsa.pub  (copy the contents)

ssh 10.10x.xx.xx

su - webuser

vim ~/.ssh/authorized_keys  (paste the contents)

http://guides.rubyonrails.org/configuring.html#custom-configuration

config/application.rb

     # The default locale is :en and all translations from config/locales/*.rb,yml are auto loaded.
     config.i18n.load_path += Dir[Rails.root.join('config', 'locales', '**', '*.{rb,yml}').to_s]
     config.i18n.default_locale = :"zh-CN"
+
+    # Rails custom configuration
+    # http://guides.rubyonrails.org/configuring.html#custom-configuration
+    config.x.redis.host = '127.0.0.1'
+    config.x.redis.port = 6379
   end
 end

config/environments/production.rb

   # Use default logging formatter so that PID and timestamp are not suppressed.
   config.log_formatter = ::Logger::Formatter.new
+
+  config.x.redis.host = '10.103.xx.xx'
+  config.x.redis.port = 6379
 end

 config/environments/staging.rb

 # Based on production defaults
 require Rails.root.join("config/environments/production")
+
+Rails.application.configure do
+  config.x.redis.host = '10.103.xx.xx'
+  config.x.redis.port = 6379
+end

config/initializers/redis.rb

+redis_config = Rails.configuration.x.redis
+Redis.current = Redis.new(:host => redis_config.host, :port => redis_config.port)

 

 

need_fetch_channel.each do |channel|
  yesterday = (Time.now - 86_400).to_i # one day ago, in epoch seconds
  all_channel_videos = channel.videos.asc(:begin_time)
  all_channel_videos.each do |v|
    if v.end_time < yesterday
      v.destroy
    end
  end
end

After refactoring:

need_fetch_channel.videos.where(:end_time.lt => DateTime.yesterday.midnight).destroy_all

DateTime.yesterday.midnight returns midnight (00:00) at the start of yesterday.
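A plain-Ruby sketch of the same instant (ActiveSupport's DateTime.yesterday needs Rails loaded, so Date.today - 1 stands in for it here):

```ruby
require 'date'

# Converting a Date to a DateTime yields 00:00 of that day,
# which is the instant DateTime.yesterday.midnight denotes in Rails.
yesterday_midnight = (Date.today - 1).to_datetime

puts yesterday_midnight.hour  # 0
puts yesterday_midnight.min   # 0
```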

 

 

    def set_redis_data(channel_id,channel)
      redis = Redis.new(:host => '10.xxx.xx.xx', :port => 6379)
      redis.del(channel_id)
      yesterday = (Time.now - 36_000).to_i # NOTE: 36_000 seconds is 10 hours, despite the variable name
      all_channel_videos = channel.videos.not_deleted.where(:begin_time.gt => yesterday).desc(:begin_time)
      all_channel_videos.each do |v|
        video_hash = {}
        video_hash[:showid] = v.showid
        video_hash[:showname] = v.showname
        video_hash[:begin_time] = v.begin_time
        video_hash[:end_time] = v.end_time
        video_hash[:vid] = v.vid
        video_hash[:thumbhd] = v.thumbhd
        video_hash[:channel_id] = v.channel_id
        video_hash[:title] = v.title
        redis.lpush channel_id, video_hash.to_json
      end
    end

The refactored method in video_controller:

def set_redis_data(channel)
  cache_key  = "channel_#{channel.channel_id}"
  video_list = Redis::List.new(cache_key)

  needed_attributes = %w(showid showname begin_time end_time vid thumbhd channel_id title)
  videos = channel.videos.active.where(:begin_time.gt => 10.hours.ago.to_i).only(needed_attributes).asc(:begin_time)
  videos_json = videos.map { |video| video.to_json }

  video_list.clear
  video_list.push(*videos_json)
end

In the model, channel.rb:

  def set_redis
    video_list = Redis::List.new("channel_#{channel_id}")

    needed_attributes = %w(showid duration showname begin_time end_time vid thumbhd channel_id title)
    all_videos = videos.active.where(:begin_time.gt => 10.hours.ago.to_i).only(needed_attributes).asc(:begin_time)
    video_list.clear

    unless all_videos.empty?
      videos_json = all_videos.map { |video| video.to_json }
      video_list.push(*videos_json)
    end
  end
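The unless all_videos.empty? guard matters: like Redis's LPUSH, a list push needs at least one value, so splatting an empty array supplies no arguments and raises. A plain-Ruby sketch of the pitfall, using a hypothetical stand-in method rather than the real Redis::List#push:

```ruby
# Hypothetical stand-in for a list push that, like Redis's LPUSH,
# requires at least one value.
def push_values(first, *rest)
  [first, *rest]
end

push_values(*%w(a b))  # fine: two arguments after the splat

begin
  push_values(*[])     # an empty splat supplies no arguments at all
rescue ArgumentError
  puts "push with no values raises ArgumentError"
end
```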

 


    next_played_videos = @channel.videos.active.where(:begin_time.gt => begin_time).asc(:begin_time) 
    next_played_videos.each do |video|
      video.begin_time = begin_time   
      video.end_time = video.begin_time.to_i + video.duration
      begin_time = video.end_time     
      video.save
    end

Iterating with each here is inefficient. Mongoid's inc method can batch-increment (or, with negative values, decrement) the given fields in a single operation; see http://mongoid.org/en/mongoid/docs/persistence.html.

next_played_videos = @channel.videos.active.where(:begin_time.gt => begin_time).asc(:begin_time)
next_played_videos.inc(begin_time: -@video.duration, end_time: -@video.duration)

 

url_hash.sort_by { |h| h["title"] }

http://stackoverflow.com/questions/3154111/how-do-i-sort-an-array-of-hashes-by-a-value-in-the-hash
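A runnable sketch of that pattern (the sample data here is made up):

```ruby
url_hash = [
  { "title" => "Beta",  "url" => "http://example.com/b" },
  { "title" => "Alpha", "url" => "http://example.com/a" },
]

# sort_by yields each hash in turn; sort on its "title" value
sorted = url_hash.sort_by { |h| h["title"] }

puts sorted.map { |h| h["title"] }.inspect  # ["Alpha", "Beta"]
```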