Create a user and grant sudo privileges

useradd -d /home/robot/ -G wheel robot  # add -m if the home directory does not exist yet
passwd robot  # set a password for the new user
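For the `wheel` group membership above to actually grant sudo, `/etc/sudoers` must contain the wheel rule; on CentOS/RHEL it is usually present already (edit with `visudo` if not):

```
## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL
```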

Install docker

su - robot
curl -sS https://get.docker.com/ | sudo sh  # installs docker-ce
# after installation it prompts you to add the current user to the `docker` group
sudo usermod -aG docker robot

# Adding a user to the `docker` group grants permission to run containers.
# Running containers requires root privileges, so this effectively means the user can gain root.

# start docker on boot
sudo systemctl enable docker

Install docker-compose

docker-compose lets you deploy docker containers and applications from a .yml/.yaml file. The version installed below is 1.24.0; check the releases page for the latest version.

sudo curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Deploy a Rails app

Create the rails app

# Create the rails app. If ruby is installed locally, run rails new directly:
rails new docker_demo --api --skip-bundle

# Without a local ruby environment, you can spin up a container from the official
# ruby image and create the rails app inside it:
docker pull ruby:2.5
docker run --rm -it -v "$PWD:/app" ruby:2.5 bash  # --rm removes the container on exit

# Inside the container:
gem install rails                                      # install rails
cd /app && rails new docker_demo --api --skip-bundle   # create the rails app

Dockerfile & docker-compose.yml

# Dockerfile

FROM ruby:2.5

WORKDIR /var/www/app
COPY Gemfile* ./
RUN gem install bundler && bundle install
COPY . .

EXPOSE 3000

CMD rails s -b 0.0.0.0

# docker-compose.yml
version: '3'

services:
  docker_demo:
    build: .
    ports:
      - "3000:3000"
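Not part of the original post, but since `COPY . .` copies the whole app directory into the image, a `.dockerignore` alongside the Dockerfile is worth adding; a minimal sketch:

```
.git
log/
tmp/
```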

Start the service

docker-compose up -d docker_demo

Troubleshooting

Starting a docker container cut off the host's ssh connection; the only way back in was the web console that vultr provides.

The culprit was the serverSpeeder service: uninstall serverSpeeder and install bbr instead.

Use case

Calling Aliyun's audio-file transcription API requires a verification string in the request header. The official docs provide a node example (link); below is a ruby version for generating the verification string.

Code

require 'net/http'
require 'base64'
require 'digest'
require 'json'
require 'openssl'

app_id = 'id'
app_secret = 'secret'
params = {"app_key": "nls-service-telephone8khz", "enable_callback": false, "oss_link": "http://aliyun-nls.oss.aliyuncs.com/asr/fileASR/examples/nls-sample.wav"}

now = "Tue, 19 Dec 2017 07:43:40 GMT"
# use a GMT-formatted time
# now = Time.now.gmtime.strftime("%a, %d %b %Y %T GMT")

headers = {
  method: 'POST',
  accept: 'application/json',
  'content-type': 'application/json',
  date: now
}

## step 1. build the stringToSign; note that the url path is NOT included
string2sign = "POST" + "\n" + headers[:accept] + "\n" +
              Digest::MD5.base64digest(params.to_json) + "\n" +
              headers[:"content-type"] + "\n" +
              headers[:date]
puts string2sign ## => POST
                 ##    application/json
                 ##    0yo5NmJ4dReSHOxlItYpvA==
                 ##    application/json
                 ##    Tue, 19 Dec 2017 07:43:40 GMT


## step 2. sign [Signature = Base64( HMAC-SHA1( AccessSecret, UTF-8-Encoding-Of(StringToSign) ) )]
digest = OpenSSL::Digest.new('sha1')
signature = Base64.encode64(OpenSSL::HMAC.digest(digest, app_secret, string2sign))

## step 3. build the authorization header [Authorization = Dataplus AccessKeyId + ":" + Signature]
auth_header = "Dataplus " + app_id + ":" + signature
puts auth_header ## => Dataplus id:oqoyn3Y1/cPpK2DfU5jy2DbaKws=

uri = URI("https://nlsapi.aliyun.com/transcriptions")
response = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
  request = Net::HTTP::Post.new(uri, {'Accept' => 'application/json',
                                      'Content-type' => 'application/json',
                                      'Date' => now,
                                      'Authorization' => auth_header})
  request.body = params.to_json
  http.request request
end

puts response.body
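As a standalone check of step 2, the snippet below signs a fixed string with a made-up secret. One caveat with the code above: `Base64.encode64` appends a trailing newline, which can leak into the Authorization header; `Base64.strict_encode64` avoids that. `demo_secret` and the string being signed are hypothetical values for illustration only.

```ruby
require 'openssl'
require 'base64'

secret = 'demo_secret'                   # hypothetical AccessSecret
string_to_sign = "POST\napplication/json"

# HMAC-SHA1 of the string, then Base64; strict_encode64 produces
# no trailing "\n", unlike Base64.encode64
hmac = OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret, string_to_sign)
signature = Base64.strict_encode64(hmac)
puts signature
```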

Step 1. Register a new OAuth application

Link: Goooooooooooooooooooo

Example:

Application name
blog

Homepage URL
https://blog.lianming.tk

Application description
lianming's blog.

Authorization callback URL
https://blog.lianming.tk

Step 2. Config

Add the following to the theme's config:

# themes/jacman/_config.yml
gitalk:
  enable: true
  clientID: ''
  clientSecret: ''
  repo: 'blog.lianming.tk'  # one of your own github repos; I just use my gitpage repo
  owner: 'icecoll'          # your github user name
  admin: ['icecoll']
  distractionFreeMode: true

Step 3. Add gitalk code

Add the gitalk code to the page template:

# themes/jacman/layout/_partial/post/comment.ejs
<% if (theme.gitalk && theme.gitalk.enable){ %>
<section id="gitalk-container" class="comment"></section>
<link rel="stylesheet" href="https://unpkg.com/gitalk/dist/gitalk.css">
<script src="https://unpkg.com/gitalk/dist/gitalk.min.js"></script>
<script>
  var gitalk = new Gitalk({
    clientID: '<%= theme.gitalk.clientID %>',
    clientSecret: '<%= theme.gitalk.clientSecret %>',
    repo: '<%= theme.gitalk.repo %>',
    id: window.location.pathname,
    owner: '<%= theme.gitalk.owner %>',
    admin: '<%= theme.gitalk.admin %>',
    distractionFreeMode: '<%= theme.gitalk.distractionFreeMode %>',
  });
  gitalk.render('gitalk-container');
</script>
<% } %>
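One caveat worth knowing: Gitalk stores each page's comments in a GitHub issue keyed by `id`, and that key ends up in an issue label, which GitHub caps at 50 characters, so long pathnames should be truncated. A hypothetical helper (`gitalkId` is my own name, not part of the theme):

```javascript
// Gitalk's id becomes a GitHub issue label, which is limited
// to 50 characters; truncate long pathnames to stay within it.
function gitalkId(pathname) {
  return decodeURIComponent(pathname).slice(0, 50);
}

console.log(gitalkId('/about/'));
```

In the template above, `id: window.location.pathname` could then become `id: gitalkId(window.location.pathname)`.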

Use case

Overlay a QR code and my wechat avatar onto a promotional image; the mini_magick gem is all that's needed.

Code

require 'mini_magick'
require 'net/http'
require 'tempfile'

def composite_image(qrcode_ticket = "gQEn8DwAAAAAAAAAAS5odHRwOi8vd2VpeGluLnFxLmNvbS9xLzAyd2RybG95cTNma2wxMDAwME0wMy0AAgRK9RhbAwQAAAAA")
  qrcode_uri = URI("https://mp.weixin.qq.com/cgi-bin/showqrcode?ticket=#{qrcode_ticket}")
  res = Net::HTTP.get_response(qrcode_uri)
  raise "get qrcode failed" unless res.is_a?(Net::HTTPSuccess)

  tmpfile = Tempfile.new(qrcode_ticket)
  open(tmpfile.path, "wb") { |file| file.write(res.body) }

  profile_uri = URI("http://thirdwx.qlogo.cn/mmopen/8StU3PHCdLkPhx9kGZM1AYf9Ou2kncJb1RCCYc3DGoBoapgtdqrSDAKWIq7oNUcekicmxfDoLok5Spicf9uG4G5ZwpkkuKoXRw/132")
  profile_res = Net::HTTP.get_response(profile_uri)
  raise "get profile image failed" unless profile_res.is_a?(Net::HTTPSuccess)

  profile_image = Tempfile.new(["profile", '.jpg'])
  open(profile_image.path, "wb") { |file| file.write(profile_res.body) }

  MiniMagick::Tool::Convert.new do |c|
    c << "monkey100.jpg"
    c.merge! [tmpfile.path, "-geometry", "205x205+433+955", "-composite"]  # -geometry sets the overlay's size and position
    c.merge! [profile_image.path, "-geometry", "95x95+10+10", "-composite"]
    c.merge! ["-pointsize", "26", "-font", "./simfang.ttf", "-fill", "black", "-draw", "text 120,60 '安'"]
    c << "out.jpg"
  end
end

composite_image()
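The `-geometry` strings above pack size and offset into one `WxH+X+Y` value; a tiny helper (my own, for illustration, positive offsets only) makes the format explicit:

```ruby
# Build an ImageMagick geometry string: WIDTHxHEIGHT+X+Y
# (X/Y are offsets from the top-left corner; positive values assumed)
def geometry(width, height, x, y)
  format("%dx%d+%d+%d", width, height, x, y)
end

puts geometry(205, 205, 433, 955) # => "205x205+433+955"
```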

git demo

Link: Goooooo

Use case

While building an online-education project, one requirement was to record the page itself. Most online-education and live-streaming providers record on the server side; the few that support client-side recording mostly use a BS architecture, and almost none record directly in the web page. Presumably in-page recording has too many sources of instability, such as client hardware requirements and the impact of page refreshes. I made some attempts anyway.

Approaches

The first three approaches all rely on WebRTC. Trying Electron's desktopCapturer showed that it cannot capture audio from system output devices (headphones, speakers); see this issue on github. Anything based on WebRTC's getUserMedia has the same limitation. Some chrome extensions, such as RecordRTC and Screencastify, can record headphone audio, presumably through extensions of their own (RecordRTC is open source and worth a look when time allows).
ffmpeg also offers desktop capture. Tried on windows, it can capture a specified window with acceptable results, though CPU usage is somewhat high.

Code

  • Electron desktopCapturer
    With this method, recording a single window cannot capture audio (reference):

    To capture video from a source provided by desktopCapturer the constraints passed to navigator.mediaDevices.getUserMedia must include chromeMediaSource: 'desktop', and audio: false.

In that case, to get the audio of the page's video/audio elements, you can try captureStream to obtain each element's stream, extract the audio tracks, and mix them with the Web Audio Api.

function startRecord() {
  electron.desktopCapturer.getSources({types: ['window', 'screen']}, (error, sources) => {
    if (error) throw error
    for (let i = 0; i < sources.length; ++i) {
      if (sources[i].name === "foo") {
        navigator.mediaDevices.getUserMedia({
          audio: false,
          video: {
            mandatory: {
              chromeMediaSource: 'desktop',
              chromeMediaSourceId: sources[i].id,
              minWidth: 1280,
              maxWidth: 1280,
              minHeight: 720,
              maxHeight: 720
            }
          }
        }).then((stream) => handleStream(stream))
          .catch((e) => handleError(e))
        return
      }
    }
  });
}

function handleStream(stream) {
  // recording a specific window with electron desktopCapturer cannot capture audio,
  // so call getUserMedia again to get the microphone
  navigator.mediaDevices.getUserMedia({audio: true, video: false}).then(function(mediaStream) {
    var audioTracks = mediaStream.getAudioTracks();
    // collect the audio tracks of the page's video and audio elements
    var medias = $("audio,video");
    for (var i = 0; i < medias.length; i++) {
      // captureStream requires experimentalFeatures to be enabled when creating the
      // BrowserWindow: mainWindow = new BrowserWindow({webPreferences: {experimentalFeatures: true}})
      var tmpStream = medias[i].captureStream();
      if (tmpStream) {
        // grab the audio track
        var tmpTrack = tmpStream.getAudioTracks()[0];
        audioTracks.push(tmpTrack);
      }
    }

    // mix the audio tracks and add the result to the stream
    if (audioTracks.length > 0) {
      var mixAudioTrack = mixTracks(audioTracks);
      stream.addTrack(mixAudioTrack);
    }

    recorder = new MediaRecorder(stream);
    recorder.ondataavailable = function(event) {
      // deal with your stream
    };
    recorder.start(1000);
  }).catch(function(err) {
    //console.log("handle stream error");
  })
}

// mix the tracks with the Web Audio Api: if a stream carries several audio tracks,
// only the first one ends up in the recorded video by default
function mixTracks(tracks) {
  var ac = new AudioContext();
  var dest = ac.createMediaStreamDestination();
  for (var i = 0; i < tracks.length; i++) {
    const source = ac.createMediaStreamSource(new MediaStream([tracks[i]]));
    source.connect(dest);
  }
  return dest.stream.getTracks()[0];
}

  • FFMPEG
    The advantage of this approach is that it combines with electron's process mechanism (ipcMain/ipcRenderer) to run the recording in a separate process, unaffected by page refreshes.
    ffmpeg commands differ across operating systems; the following was tried on windows.
    const electron = require('electron')
    const app = electron.app

    const ffmpegPath = require('ffmpeg-static');
    const ffdevices = require('ffdevices');
    const child_process = require('child_process');

    const fs = require("fs");
    const path = require('path')
    const userDataPath = app.getPath("userData");
    const recordPath = path.join(userDataPath, 'records');

    console.log(ffmpegPath.path);
    ffdevices.ffmpegPath = ffmpegPath.path;
    ffdevices.gdigrab = false;

    var AppCapture = function() {
      var ctx = this;

      this.captureProcess = null;
      this.fileName = null;
      this.isCapturing = false;

      this.setFile = function(fileName) {
        this.fileName = fileName;
      }

      this.startCapture = function() {
        console.log('enter start capture');
        ffdevices.getAll(function(error, devices) {
          if (!error) {
            var args = [];
            var audioCount = 0;

            // add the audio devices
            for (var i = 0; i < devices.length; i++) {
              if (devices[i].type == 'audio' && devices[i].deviceType == 'dshow') {
                audioCount = audioCount + 1;
                var audioArgs = ['-f', 'dshow', '-i', `audio=${devices[i].name}`];
                args = args.concat(audioArgs);
              }
            }

            // create the output directory
            if (!fs.existsSync(recordPath)) {
              fs.mkdirSync(recordPath);
            }

            // -q:v / -q:a control video/audio quality: the lower the value,
            // the higher the quality, see the ffmpeg docs
            var fullPathName = path.join(recordPath, ctx.fileName);
            var videoArgs = [
              '-y',
              '-f', 'gdigrab',
              '-framerate', '100',  // input option: must come before -i for gdigrab
              '-i', 'title=monkey100',
              '-vf', "fps=30",
              '-video_size', '720x480',
              '-q:v', '10',
              '-q:a', '100',
              '-draw_mouse', '1',
              '-t', '00:20:00', // max duration 20 minutes
              fullPathName
            ];
            args = args.concat(videoArgs);

            if (audioCount > 1) {
              var filter_complex_arg = '';
              for (var j = 0; j < audioCount; j++) {
                filter_complex_arg += `[${j}:a]`;
              }
              // label the merged output [a] so it can be mapped below
              filter_complex_arg += `amerge=inputs=${audioCount}[a]`;
              args = args.concat([
                '-filter_complex', filter_complex_arg,
                //"-c:a", "pcm_s16le"
                '-map', `${audioCount}`, // the video input comes after the audio inputs
                '-map', '[a]'
              ]);
            }

            console.log('start recording');
            ctx.captureProcess = child_process.spawn(ffmpegPath.path, args);
            ctx.isCapturing = true;
            console.log('recording started');

            ctx.captureProcess.stderr.on('data', (data) => {
              console.log(`error: ${data}`);
            });

            // stop after 15 minutes
            setTimeout(function() {
              ctx.stopCapture();
            }, 1000 * 60 * 15);
          } else {
            console.log(`get devices error: ${error}`);
          }
        });
      }

      this.stopCapture = function() {
        console.log('stopping');
        if (this.isCapturing && this.captureProcess) {
          // stop recording by sending 'q' to ffmpeg's stdin
          this.captureProcess.stdin.write('q');
          this.isCapturing = false;
        }
      }
    }

    module.exports = AppCapture;
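To make the `-filter_complex` construction easier to follow, here is the string assembly in isolation, with the `[a]` output label that the `-map '[a]'` option needs (my own standalone sketch, not part of the original module):

```javascript
// Build an amerge filter for N dshow audio inputs; the merged
// output is labeled [a] so it can be selected with -map '[a]'
function buildAmergeFilter(audioCount) {
  let filter = '';
  for (let j = 0; j < audioCount; j++) {
    filter += `[${j}:a]`;
  }
  filter += `amerge=inputs=${audioCount}[a]`;
  return filter;
}

console.log(buildAmergeFilter(2)); // → [0:a][1:a]amerge=inputs=2[a]
```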

Welcome to Hexo! This is your very first post. Check documentation for more info. If you get any problems when using Hexo, you can find the answer in troubleshooting or you can ask me on GitHub.

Quick Start

Create a new post

$ hexo new "My New Post"

More info: Writing

Run server

$ hexo server

More info: Server

Generate static files

$ hexo generate

More info: Generating

Deploy to remote sites

$ hexo deploy

More info: Deployment