Quick start for UI tests with XCTest

XCTest was introduced with Xcode 5 for iOS 7, but it was not until Xcode 8 and iOS 10 that XCTest became the default, and only, choice for UI testing, as the previous UIAutomation tool was deprecated at that point.

Prerequisite:

  • Xcode and the Xcode command line tools should be installed.

  • Download the sample Xcode project from here, unzip it, and open the Swift project.

Get Started for XCTest:

  1. Add a new target through “File” -> “New” -> “Target…” -> “iOS UI Testing Bundle”, and fill in the required info.

  2. Click the red “Record UI Test” button at the bottom to start recording; once finished, click it again to stop.

  3. Add assertions at the end of each scenario, such as those in “UIKitCatalogUITests.swift” in XCTestSample.zip.

    For the detailed usage of assertions, refer to “Test Assertions” in the XCTest topics.

  4. It’s hard to check the text of a label directly, so if we want to verify it, we add a value to the label’s accessibility identifier and then use the following lines in the test script:

    let label = app.staticTexts["label_identifier"]
    XCTAssertEqual(label.label, "label_text")
    XCTAssert(app.staticTexts["label_text"].exists)
  5. If you want to learn more about XCTest, refer to the Cheat Sheet.

Quick start for InSpec and ServerSpec, also comparison of them

InSpec and ServerSpec are infrastructure testing tools based on Ruby. InSpec was recently added to the ThoughtWorks Tech Radar.

Prerequisite:

RVM, Ruby (>2.2) and rubygems should be installed.

Get Started for InSpec:

  1. Install InSpec.

    gem install inspec

  2. Write and run the InSpec script according to its API document.

    You can reference the script here (a minimal example is also sketched at the end of this step).

    Run inspec exec inspec.rb and check the result.

    As you can see, the script is pretty much the same as the one we used for ServerSpec; you can reference that script here. The instructions for running it are in this article.

    There is also an official article about migrating from ServerSpec to InSpec (which also shows the differences between the two tools’ resources).
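
    To give a flavour of the DSL, here is a minimal, illustrative InSpec control (a sketch only; the package, service and port it checks are hypothetical and not taken from the referenced script):

    # illustrative InSpec control; the resource names below are hypothetical
    control 'sshd-01' do
      title 'SSH daemon should be installed, enabled and listening'

      describe package('openssh-server') do
        it { should be_installed }
      end

      describe service('sshd') do
        it { should be_enabled }
        it { should be_running }
      end

      describe port(22) do
        it { should be_listening }
      end
    end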

  3. To generate a JSON file as the test result, run inspec exec sample_inspec.rb --format json > report.

    It will generate a test result file named “report” every time the test runs.

Comparison between InSpec and ServerSpec:

  • InSpec has 98 types of resources but ServerSpec has only 41.

  • InSpec has more comprehensive documentation (you can reference it here), and it even has a series of detailed tutorials.

In general, I would suggest using InSpec for infrastructure testing in new projects.

Configure ServerSpec to run tests against multiple hosts

We have learned how to set up tests with ServerSpec in Quick start for ServerSpec and Testinfra, also comparison of them, but that folder structure only supports testing against a single host. In the real world, we need to reuse the tests against multiple hosts, so let’s see how to do that.

Prerequisite:

ServerSpec setup is completed according to the steps in Quick start for ServerSpec and Testinfra, also comparison of them.

Configure ServerSpec to run tests against multiple hosts:

We only need to change the Rakefile in the test folder. Its content should be changed to the following:

require 'rake'
require 'rspec/core/rake_task'

hosts = %w(
  host1
  host2
)

task :spec => 'spec:all'

namespace :spec do
  task :all => hosts.map {|h| 'spec:' + h }

  hosts.each do |host|
    desc "Run serverspec to #{host}"
    RSpec::Core::RakeTask.new(host) do |t|
      ENV['TARGET_HOST'] = host
      t.pattern = "spec/{host_server}/*_spec.rb"
    end
  end
end

Please note: host1 and host2 are the hosts we would like to run our tests against. We keep host_server in the pattern because the folder holding our test scripts is named “host_server”, so every host reuses the same specs. With this Rakefile, rake spec runs the suite against all hosts in turn, and rake spec:host1 runs it against a single host. A simplified sketch of how TARGET_HOST is picked up by the spec helper follows below.

You can reference the script here.
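
The Rakefile above only sets the TARGET_HOST environment variable; it is the spec_helper generated by serverspec-init that turns that variable into an SSH connection. A simplified sketch of that hookup, assuming the standard SSH backend (your generated spec_helper.rb may differ in detail):

# spec/spec_helper.rb - simplified sketch of the SSH-backend helper
require 'serverspec'
require 'net/ssh'
require 'etc'

set :backend, :ssh

host = ENV['TARGET_HOST']                # set by the Rakefile above
options = Net::SSH::Config.for(host)     # honours entries in ~/.ssh/config
options[:user] ||= Etc.getlogin

set :host,        options[:host_name] || host
set :ssh_options, options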

Integrate ServerSpec, Testinfra and ZAP with CentOS 7 Minimal

We already know how to setup tests with ServerSpec and Testinfra in Quick start for ServerSpec and Testinfra, also comparison of them and Quick start for integrating ZAP into CI. Now, let’s see how to integrate them with CentOS 7 Minimal.

Prerequisite:

  1. Install CentOS 7 Minimal, set up the root user, and log in.

  2. Enable network on CentOS 7 Minimal following this article.

    The detailed steps are:

    1) Open Network Manager through nmtui;

    2) Choose “Edit connection” and press Enter (use the TAB key to move between options);

    3) Choose your network interface and select “Edit”;

    4) Choose “Automatic” in “IPv4 CONFIGURATION” and check “Automatically connect” checkbox, then press “OK” to quit from Network Manager;

    5) Restart the network service with service network restart.

  3. Adjust the screen resolution following this article.

    The detailed steps are:

    1) Edit GRUB file: vi /etc/default/grub

    2) Append vga=792 to the line GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet",

    so that it becomes GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet vga=792";

    See the detailed GRUB VGA Modes, or scroll down to the table at the bottom of this article.

    3) Apply the change to the GRUB configuration: grub2-mkconfig -o /boot/grub2/grub.cfg

    4) Reboot for the change to take effect: reboot.

  4. Add a Yum source and update.

    yum install epel-release
    yum -y update
  5. Configure an SSH key to connect to the server easily.

    • Generate ssh key: ssh-keygen -t rsa
    • Copy ssh public key to server: ssh-copy-id -i ~/.ssh/id_rsa.pub username@server
    • Add the following lines to ~/.ssh/config.

      Host server
      User username

Quick start for ServerSpec:

  1. Install Ruby.

    yum install ruby

  2. Install RubyGems.

    yum install rubygems

  3. Install Rake.

    gem install rake

  4. Install ServerSpec.

    gem install serverspec

  5. Initialize the ServerSpec folder with basic settings. Please note the target server is set in this step.

    serverspec-init

  6. Write and run the ServerSpec script according to its API document.

    You can reference the script here.

    Run rake spec under the test folder and check the result.

  7. To run a specific test rather than the entire test suite:

    Run rake spec spec/host_server/sample_spec.rb under the test folder and check the result.

Quick Start for Testinfra:

  1. Python 2.7 is installed by default, so we just need to install Pip.

    yum -y install python-pip
    pip install --upgrade pip
  2. Install Testinfra and Paramiko.

    pip install testinfra
    pip install paramiko

  3. Write and run the Testinfra script according to its API document.

    You can reference the script here.

    Run testinfra testinfra_test.py and check the result.

  4. Some useful arguments can make the test result clearer.

    Instead of using testinfra testinfra_test.py directly, we can add some arguments, such as -q, -s, --disable-warnings and --junit-xml.

    • The argument -q runs Testinfra in quiet mode, with less info exposed
    • The argument -s disables output capturing
    • The argument --disable-warnings disables warnings while Testinfra runs
    • The argument --junit-xml exports the Testinfra test result to an XML file

    After adding those arguments, the command should look like testinfra -q -s --disable-warnings testinfra_test.py --junit-xml=report.xml

  5. Now we can run Testinfra against the server using testinfra -q -s --disable-warnings --ssh-config=/Path/to/ssh/config --hosts=server testinfra_test.py --junit-xml=report.xml.

Quick Start for ZAP:

  1. Install JDK.

    yum install java-1.8.0-openjdk*

  2. Download ZAP installation script.

    wget https://github.com/zaproxy/zaproxy/releases/download/2.6.0/ZAP_2_6_0_unix.sh

  3. Change permission of the installation script and execute it.

    chmod 777 ZAP_2_6_0_unix.sh
    ./ZAP_2_6_0_unix.sh
  4. Install required libraries.

    1) Install Selenium-WebDriver

    gem install selenium-webdriver

    2) Install IO

    gem install io

    3) Install Rest-Client

    yum install gcc-c++
    gem install rest-client

    4) Install RSpec

    gem install rspec

    5) Install and configure the headless Firefox

    yum -y install firefox Xvfb libXfont Xorg
    yum -y groupinstall "X Window System" "Desktop" "Fonts" "General Purpose Desktop"
    Xvfb :99 -ac -screen 0 1280x1024x24 &
    export DISPLAY=:99

    6) Download and setup geckodriver.

    wget https://github.com/mozilla/geckodriver/releases/download/v0.18.0/geckodriver-v0.18.0-linux64.tar.gz
    tar -xvzf geckodriver-v0.18.0-linux64.tar.gz
    mv geckodriver /usr/lib64

    7) Add the following line to ~/.bash_profile.

    export PATH=$PATH:/usr/lib64

    And run source ~/.bash_profile.

    8) Alternatively, we can use Chromedriver:

    i) Create a file called /etc/yum.repos.d/google-chrome.repo and add the following lines of code to it.

    [google-chrome]
    name=google-chrome
    baseurl=http://dl.google.com/linux/chrome/rpm/stable/$basearch
    enabled=1
    gpgcheck=1
    gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

    ii) Check whether the latest version is available from Google’s own repository using yum info google-chrome-stable

    iii) Update yum using yum update

    iv) Install Chrome using yum install google-chrome-stable unzip

    v) Download Chromedriver using wget https://chromedriver.storage.googleapis.com/2.32/chromedriver_linux64.zip

    vi) Unzip Chromedriver using unzip chromedriver_linux64.zip

    vii) Move Chromedriver to a directory on $PATH using mv chromedriver bin/

  5. Run ruby add_assertions_to_check_zap_result.rb and check the result.

    You can reference the script here.

GRUB VGA Modes

Colour Depth    640x480   800x600   1024x768   1280x1024   1400x1050   1600x1200
8 (256)         769       771       773        775
15 (32K)        784       787       790        793
16 (65K)        785       788       791        794         834         884
24 (16M)        786       789       792        795

Quick start for integrating ZAP into CI

OWASP ZAP is a widely used, open-source security testing tool. Let’s see how we can integrate it with our automated functional testing in CI.

Prerequisite:

RVM, Ruby and rubygems are installed.

Get Started:

  1. Download ZAP from OWASP_Zed_Attack_Proxy_Project and install it.

  2. Install Selenium-WebDriver, IO, Rest-Client and RSpec gems:

    1) Install Selenium-WebDriver

    gem install selenium-webdriver

    2) Install IO

    gem install io

    3) Install Rest-Client

    gem install rest-client

    4) Install RSpec

    gem install rspec

  3. Write and run a basic Selenium-WebDriver script using Ruby. The script is the same as in my previous Gitbook BDD with PageObject.

    The script will look like the following:

    require 'selenium-webdriver'
    driver = Selenium::WebDriver.for :firefox
    driver.get "http://www.google.com"
    element = driver.find_element :name => "q"
    element.send_keys "Cheese!"
    element.submit
    p "Page title is #{driver.title}"
    wait = Selenium::WebDriver::Wait.new(:timeout => 10)
    wait.until { driver.title.downcase.start_with? "cheese!" }
    p "Page title is #{driver.title}"
    driver.quit

    You can reference the script here.

    Run ruby simple_script.rb and check the result.

  4. Add steps to start ZAP and proxy the test script’s traffic through ZAP.

    The script becomes the following:

    require 'selenium-webdriver'
    + require 'io/console'
    + system("pkill java") #To close any existing ZAP instance.
    + system("pkill firefox") #To close any existing Firefox instance.
    + IO.popen("/Applications/ZAP\\ 2.6.0.app/Contents/Java/zap.sh -daemon -config api.disablekey=true") #The path here should be the path to zap.sh inside the ZAP package/folder on your machine; with the option -config api.disablekey=true, ZAP will not check the API key, which is enabled by default since ZAP 2.6.0
    + p "OWASP ZAP launch completed"
    + sleep 5 #To let ZAP start completely
    + profile = Selenium::WebDriver::Firefox::Profile.new
    + proxy = Selenium::WebDriver::Proxy.new(http: "localhost:8080") #Normally ZAP listens on port 8080; if not, change this to the actual port ZAP is listening on
    + profile.proxy = proxy
    + options = Selenium::WebDriver::Firefox::Options.new(profile: profile)
    + driver = Selenium::WebDriver.for :firefox, options: options
    - driver = Selenium::WebDriver.for :firefox
    driver.get "http://www.google.com"
    element = driver.find_element :name => "q"
    element.send_keys "Cheese!"
    element.submit
    p "Page title is #{driver.title}"
    wait = Selenium::WebDriver::Wait.new(:timeout => 10)
    wait.until { driver.title.downcase.start_with? "cheese!" }
    p "Page title is #{driver.title}"
    driver.quit

    You can reference the script here.

    Run ruby add_zap_start.rb and check the result.

  5. Read the test results from ZAP.

    The script becomes the following:

    require 'selenium-webdriver'
    require 'io/console'
    + require 'rest-client'
    system("pkill java") #To close any existing ZAP instance.
    system("pkill firefox") #To close any existing Firefox instance.
    IO.popen("/Applications/ZAP\\ 2.6.0.app/Contents/Java/zap.sh -daemon -config api.disablekey=true") #The path here should be the path to zap.sh inside the ZAP package/folder on your machine; with the option -config api.disablekey=true, ZAP will not check the API key, which is enabled by default since ZAP 2.6.0
    p "OWASP ZAP launch completed"
    sleep 5 #To let ZAP start completely
    profile = Selenium::WebDriver::Firefox::Profile.new
    proxy = Selenium::WebDriver::Proxy.new(http: "localhost:8080") #Normally ZAP listens on port 8080; if not, change this to the actual port ZAP is listening on
    profile.proxy = proxy
    options = Selenium::WebDriver::Firefox::Options.new(profile: profile)
    driver = Selenium::WebDriver.for :firefox, options: options
    driver.get "http://www.google.com"
    element = driver.find_element :name => "q"
    element.send_keys "Cheese!"
    element.submit
    p "Page title is #{driver.title}"
    wait = Selenium::WebDriver::Wait.new(:timeout => 10)
    wait.until { driver.title.downcase.start_with? "cheese!" }
    p "Page title is #{driver.title}"
    + JSON.parse RestClient.get "http://localhost:8080/json/core/view/alerts" #To trigger ZAP to raise alerts if any
    + sleep 5 #Give ZAP some time to process
    + response = JSON.parse RestClient.get "http://localhost:8080/json/core/view/alerts", params: { zapapiformat: 'JSON', baseurl: "http://clients1.google.com", start: 1 } #Get the alerts ZAP found; note that baseurl is matched exactly from the beginning of the URL
    driver.quit
    + RestClient.get "http://localhost:8080/JSON/core/action/shutdown" #Close ZAP instance

    You can reference the script here.

    Run ruby read_zap_result.rb and check the result.

  6. Set up assertions to check the Low Risks.

    The script becomes the following:

    require 'selenium-webdriver'
    require 'io/console'
    require 'rest-client'
    + require 'rspec/expectations'
    + include RSpec::Matchers
    system("pkill java") #To close any existing ZAP instance.
    system("pkill firefox") #To close any existing Firefox instance.
    IO.popen("/Applications/ZAP\\ 2.6.0.app/Contents/Java/zap.sh -daemon -config api.disablekey=true") #The path here should be the path to zap.sh inside the ZAP package/folder on your machine; with the option -config api.disablekey=true, ZAP will not check the API key, which is enabled by default since ZAP 2.6.0
    p "OWASP ZAP launch completed"
    sleep 5 #To let ZAP start completely
    profile = Selenium::WebDriver::Firefox::Profile.new
    proxy = Selenium::WebDriver::Proxy.new(http: "localhost:8080") #Normally ZAP listens on port 8080; if not, change this to the actual port ZAP is listening on
    profile.proxy = proxy
    options = Selenium::WebDriver::Firefox::Options.new(profile: profile)
    driver = Selenium::WebDriver.for :firefox, options: options
    driver.get "http://www.google.com"
    element = driver.find_element :name => "q"
    element.send_keys "Cheese!"
    element.submit
    p "Page title is #{driver.title}"
    wait = Selenium::WebDriver::Wait.new(:timeout => 10)
    wait.until { driver.title.downcase.start_with? "cheese!" }
    p "Page title is #{driver.title}"
    JSON.parse RestClient.get "http://localhost:8080/json/core/view/alerts" #To trigger ZAP to raise alerts if any
    sleep 5 #Give ZAP some time to process
    response = JSON.parse RestClient.get "http://localhost:8080/json/core/view/alerts", params: { zapapiformat: 'JSON', baseurl: "http://clients1.google.com", start: 1 } #Get the alerts ZAP found
    + response['alerts'].each {|x| p "#{x['alert']} risk level: #{x['risk']}"} #Extract the risks found
    + events = response['alerts']
    + low_count = events.select{|x| x['risk'] == 'Low'}.size #Count the Low Risks
    + expect(low_count).to equal(1) #Expect only one Low Risk
    driver.quit
    RestClient.get "http://localhost:8080/JSON/core/action/shutdown" #Close ZAP instance

    You can reference the script here.

    Run ruby add_assertions_to_check_zap_result.rb and check the result.

  7. Now we can trigger this script from any CI tool using the command above.

Quick start for ServerSpec and Testinfra, also comparison of them

ServerSpec checks that servers are configured correctly by testing their actual state with RSpec. Testinfra is a ServerSpec equivalent in Python, based on Pytest.

Prerequisite:

  1. RVM, Ruby and rubygems are installed.

  2. Python and pip are installed.

  3. We want to run Testinfra and ServerSpec against the server rather than our local machine, so we need to set up an SSH key connection to the server.

    • Generate ssh key: ssh-keygen -t rsa
    • Copy ssh public key to server: ssh-copy-id -i ~/.ssh/id_rsa.pub username@server
    • Add the following lines to ~/.ssh/config.

      Host server
      User username

Get Started for ServerSpec:

  1. Install ServerSpec.

    gem install serverspec

  2. Initialize the ServerSpec folder with basic settings. Please note the target server is set in this step.

    serverspec-init

  3. Write and run the ServerSpec script according to its API document.

    You can reference the script here.

    Run rake spec under the test folder and check the result (a minimal spec example is sketched after this step).
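
    For reference, a minimal ServerSpec spec might look like the following (a sketch only; the package, service and port checked here are illustrative, not taken from the referenced script):

    # spec/host_server/sample_spec.rb - illustrative example
    require 'spec_helper'

    describe package('httpd') do
      it { should be_installed }
    end

    describe service('httpd') do
      it { should be_enabled }
      it { should be_running }
    end

    describe port(80) do
      it { should be_listening }
    end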

  4. To run a specific test rather than the entire test suite:

    Run rake spec spec/host_server/sample_spec.rb under the test folder and check the result.

  5. To generate an HTML file as the test result, we can add t.rspec_opts = '--format html --out reports/test_results.html' in the Rakefile where the Rake task is created, e.g.:

    require 'rake'
    require 'rspec/core/rake_task'

    task :spec => 'spec:all'
    task :default => :spec

    namespace :spec do
      targets = []
      Dir.glob('./spec/*').each do |dir|
        next unless File.directory?(dir)
        target = File.basename(dir)
        target = "_#{target}" if target == "default"
        targets << target
      end

      task :all => targets
      task :default => :all

      targets.each do |target|
        original_target = target == "_default" ? target[1..-1] : target
        desc "Run serverspec tests to #{original_target}"
        RSpec::Core::RakeTask.new(target.to_sym) do |t|
          ENV['TARGET_HOST'] = original_target
        + t.rspec_opts = '--format html --out reports/test_results.html'
        end
      end
    end

    It will generate a test result file named “test_results.html” under the “reports” folder every time the tests run.

Get Started for Testinfra:

  1. Install Testinfra and Paramiko.

    pip install testinfra
    pip install paramiko

  2. Write and run the Testinfra script according to its API document.

    You can reference the script here.

    Run testinfra testinfra_test.py and check the result.

  3. Some useful arguments can make the test result clearer.

    Instead of using testinfra testinfra_test.py directly, we can add some arguments, such as -q, -s, --disable-warnings and --junit-xml.

    • The argument -q runs Testinfra in quiet mode, with less info exposed
    • The argument -s disables output capturing
    • The argument --disable-warnings disables warnings while Testinfra runs
    • The argument --junit-xml exports the Testinfra test result to an XML file

    After adding those arguments, the command should look like testinfra -q -s --disable-warnings testinfra_test.py --junit-xml=report.xml

  4. Now we can run Testinfra against the server using testinfra -q -s --disable-warnings --ssh-config=/Path/to/ssh/config --hosts=server testinfra_test.py --junit-xml=report.xml.

Comparison between ServerSpec and Testinfra:

Advantages of ServerSpec
  • More documentation and community support (compared to Testinfra);

  • The scripts, test results and reports are more readable (Testinfra is based on Pytest and can only export reports to XML, not JSON);

  • Although Testinfra supports “sysctl” and the command runs successfully on the server itself, it fails when invoked from the test script with the error “bash: sysctl: command not found”; the same may occur with other commands/resources (a potential risk);

  • ServerSpec supports more resources, and more attributes per resource, than Testinfra.

Advantages of Testinfra
  • Supports most common resources, such as Docker, File, Group, Service, Socket, etc. (equivalent to ServerSpec);

  • Shows the actual value when a permission check fails, so debugging is quicker.

Conclusion

If Python is the only language you can choose, you have to use Testinfra; otherwise, I recommend ServerSpec for the benefits described above.

Quick Start for Docker

Installation

Download from https://www.docker.com/docker-mac

Register a Docker account

Register on https://www.docker.com/

Check Docker availability

Run the following commands in Terminal:
docker -v
docker info

Pull a Docker image. e.g. “Ruby”

docker pull ruby

Check the ruby version in Docker image “Ruby”

docker run ruby ruby -v

Run a Docker image in the background with a container name, e.g. “trial”

docker run -idt --name=trial ruby

Check all the running Docker containers

docker ps

Attach to the running container and execute commands in it

You can use docker attach trial,
but it’s better to use docker exec -i -t trial sh

Reference
Docker — 从入门到实践

Quick start for Mocha + Chai + SuperTest in API Testing with Mock Server (moco) and Tips

There are many tool combinations for testing APIs; here we choose a commonly used one for demonstration, and the rest work in much the same way. We will use moco to quickly simulate a server.

Environment Settings:

  1. Install Homebrew

    /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

  2. Install NodeJS

    brew install node

  3. Install Mocha, Chai, SuperTest, Express and Body-Parser

    npm install -g mocha
    npm install chai
    npm install supertest
    npm install express
    npm install body-parser
  4. Download the moco standalone file from Github

  5. Since Mocha only runs scripts under the “test” folder by default, we have to create a folder named “test”.

Start a simple server and test on it:

  1. Create a JSON file named “user1.json”, and add the following content to it:

    [
      {
        "response" :
        {
          "status" : "200"
        }
      }
    ]

    Please note: we only define the HTTP status code here for now.

  2. Start the mock server with the following command from the “test” folder:

    java -jar moco-runner-0.11.1-standalone.jar http -p 3000 -c user1.json

    And you should see the following lines in your Terminal:

    10 Jul 2017 16:17:43 [main] INFO Server is started at 3000
    10 Jul 2017 16:17:43 [main] INFO Shutdown port is 56123
    10 Jul 2017 16:17:45 [nioEventLoopGroup-3-2] INFO Request received:
  3. You can open http://localhost:3000/ in a browser to check the API response, but since we haven’t set any response value except the status code, you won’t see anything in the browser.

  4. Create a script file named “user1_test.js”, and add the following to it:

    1) Add aliases to shorten the names of the required libs:

    var should = require('chai').should(),
    expect = require('chai').expect,
    supertest = require('supertest'),

    2) Set up the URL we want to test against:

    api = supertest('http://localhost:3000');

    3) Define a function to describe the feature we want to test:

    describe('User', function() {
    });

    4) Define a test step inside the describe block above:

    it('should return a response with HTTP code 200', function(done) {
      api.get('').expect(200, done);
    });
  5. Run the test to check the result by using:

    mocha or mocha user1_test.js

    And you should see a result like:

    User
    ✓ should return a response with HTTP code 200
    1 passing (28ms)

    Great! It works.

  6. Now, let’s make sure it works as expected by changing the expected HTTP code to 404 and running it again. You should see the following:

    User
    1) should return a response with HTTP code 200
    0 passing (32ms)
    1 failing
    1) User should return a response with HTTP code 200:
    Error: expected 404 "Not Found", got 200 "OK"

    Awesome! It means the test really works. Now, revert the change.

  7. It’s time to add more parameters to make it a little more complex.

    1) Set the Accept header in the test script by changing the API test line to the following:

    api.get('').set('Accept', 'application/json').expect(200, done);

    2) Change the server response to apply only to a specific URL by adding the URI above the HTTP status code in “user1.json”; the entire file should look like:

    [
      {
        "request" :
        {
          "uri" : "/users/1"
        },
        "response" :
        {
          "status" : "200"
        }
      }
    ]

    Please note: you don’t need to restart the mock server. Once you save the JSON file in the correct moco format, the server reloads it automatically. But if you save it with an error, you have to fix it before starting again, otherwise it won’t respond correctly. For detailed moco usage and API documents, please refer to the Documents section on the moco GitHub page.

    3) Now, it’s time to change the test script accordingly. We need to change the API test line to the following:

    api.get('/users/1').set('Accept', 'application/json').expect(200, done);

    4) Let’s run the test script again; you should see the test pass:

    User
    ✓ should return a response with HTTP code 200 (177ms)
    1 passing (186ms)

    5) We can also add a JSON body to the API response, like the following:

    [
      {
        "request" :
        {
          "uri" : "/users/1"
        },
        "response" :
        {
          "status" : "200",
          "json":
          {
            "text": "Not Empty"
          }
        }
      }
    ]

    6) Then change the test script correspondingly:

    it('should return a response with HTTP code 200 and the response text should be "Not Empty"', function(done) {
      api.get('/users/1').set('Accept', 'application/json').expect(200).end(function(err,res){
        expect(res.body).to.have.property("text");
        expect(res.body.text).to.equal("Not Empty");
        done(err);
      });
    });

    The test result should show “Pass”.

    7) What if we change the expected HTTP status code or the expected response text? Let’s try.

    After changing the expected HTTP status code to 500 and running the test, the result will be:

    User
    1) should return a response with HTTP code 200 and the response text should be "Not Empty"
    0 passing (44ms)
    1 failing
    1) User should return a response with HTTP code 200 and the response text should be "Not Empty":
    Error: expected 500 "Internal Server Error", got 200 "OK"

    Revert that change, then change the expected response text to “Empty”; once the test runs, the result will be:

    User
    1) should return a response with HTTP code 200 and the response text should be "Not Empty"
    0 passing (46ms)
    1 failing
    1) User should return a response with HTTP code 200 and the response text should be "Not Empty":
    Uncaught AssertionError: expected 'Not Empty' to equal 'Empty'
    + expected - actual
    -Not Empty
    +Empty

    Don’t forget to revert the change.

    8) The test script works as expected, so next, we will make it more complex.

Make the server more complex and of course, test against it:

  1. Make the API respond only to a specific request body by changing the API definition to:

    [
      {
        "request" :
        {
          "uri" : "/users/1",
          "json": {
            "name": "Yale"
          }
        },
        "response" :
        {
          "status" : "200",
          "json":
          {
            "gender": "Male"
          }
        }
      }
    ]
  2. Change the test script to reflect the changes:

    it('should return a response with HTTP code 200, and the response text should be "Male" if request the gender of "Yale" as "name"', function(done) {
      api.get('/users/1').set('Accept', 'application/json').send({name:"Yale"}).expect(200).end(function(err,res){
        expect(res.body).to.have.property("gender");
        expect(res.body.gender).to.equal("Male");
        done(err);
      });
    });
  3. Once the test script runs, it will pass.

  4. If we change the value in the request, e.g. “Yale” to “Yong”, the test run will show:

    User
    1) should return a response with HTTP code 200, and the response text should be "Male" if request the gender of "Yale" as "name"
    0 passing (47ms)
    1 failing
    1) User should return a response with HTTP code 200, and the response text should be "Male" if request the gender of "Yale" as "name":
    Uncaught AssertionError: expected {} to have property 'gender'
  5. If we change the two “gender” occurrences to “sex”, the test run will show:

    User
    1) should return a response with HTTP code 200, and the response text should be "Male" if request the gender of "Yale" as "name"
    0 passing (39ms)
    1 failing
    1) User should return a response with HTTP code 200, and the response text should be "Male" if request the gender of "Yale" as "name":
    Uncaught AssertionError: expected {} to have property 'sex'
  6. If we change “Male” to “Man”, the test run will show:

    User
    1) should return a response with HTTP code 200, and the response text should be "Male" if request the gender of "Yale" as "name"
    0 passing (44ms)
    1 failing
    1) User should return a response with HTTP code 200, and the response text should be "Male" if request the gender of "Yale" as "name":
    Uncaught AssertionError: expected {} to have property 'gender'
  7. You may notice that the test script still uses the GET method rather than PUT or POST, yet the test still passes. Now, let’s make it stricter. We need to change the API definition to the following:

    [
      {
        "request" :
        {
          "uri" : "/users/1",
          "method": "get",
          "json": {
            "name": "Yale"
          }
        },
        "response" :
        {
          "status" : "500",
          "json":
          {
            "gender": "Male"
          }
        }
      },
      {
        "request" :
        {
          "uri" : "/users/1",
          "method": "put",
          "json": {
            "name": "Yale"
          }
        },
        "response" :
        {
          "status" : "200",
          "json":
          {
            "gender": "Male"
          }
        }
      }
    ]
  8. Simply change the HTTP method to “put” in the test script and check the result:

    In the test script:

    api.put('/users/1').set('Accept', 'application/json').send({name:"Yale"}).expect(200).end(function(err,res){
      expect(res.body).to.have.property("gender");
      expect(res.body.gender).to.equal("Male");
      done(err);
    });
    });

    And the test result:

    User
    ✓ should return a response with HTTP code 200, and the response text should be "Male" if request the gender of "Yale" as "name"
    1 passing (41ms)
  9. Let’s see what happens if we change the HTTP method back to “get” in the test script and check the result:

    In the test script:

    api.get('/users/1').set('Accept', 'application/json').send({name:"Yale"}).expect(200).end(function(err,res){
      expect(res.body).to.have.property("gender");
      expect(res.body.gender).to.equal("Male");
      done(err);
    });
    });

    And the test result:

    User
    1) should return a response with HTTP code 200, and the response text should be "Male" if request the gender of "Yale" as "name"
    0 passing (39ms)
    1 failing
    1) User should return a response with HTTP code 200, and the response text should be "Male" if request the gender of "Yale" as "name":
    Error: expected 200 "OK", got 500 "Internal Server Error"

Final stage - more advanced usage:

  1. If we want to add more API responses, do we need to put them all in one JSON file? Certainly not; let’s see how we can split them.

    1) Create another JSON file named “user2.json”, and add the following content to it:

    [
      {
        "request" :
        {
          "uri" : "/users/2",
          "method": "post",
          "json": {
            "name": "Yong"
          }
        },
        "response" :
        {
          "status" : "500",
          "text" : "Internal Server Error"
        }
      }
    ]

    2) Create another JS file named “user2_test.js”, and add the following content to it:

    var should = require('chai').should(),
        expect = require('chai').expect,
        supertest = require('supertest'),
        api = supertest('http://localhost:3000');
    describe('User', function() {
      it('should return a response with HTTP code 500 and its content if request the gender of "Yong" as "name"', function(done) {
        api.post('/users/2').set('Accept', 'application/json').send({name:"Yong"}).expect(500).end(function(err,res){
          if (err) return done(err);
          expect(res.error.text).to.equal("Internal Server Error");
          done(err);
        });
      });
    });

    3) Once you run the test script, it will show:

    User
    ✓ should return a response with HTTP code 500 and its content if request the gender of "Yong" as "name"
    1 passing (41ms)

    4) If we only change the expected HTTP status code in the test script to 404, the result will be:

    User
    1) should return a response with HTTP code 500 and its content if request the gender of "Yong" as "name"
    0 passing (45ms)
    1 failing
    1) User should return a response with HTTP code 500 and its content if request the gender of "Yong" as "name":
    Error: expected 404 "Not Found", got 500 "Internal Server Error"

    5) And if we only change the expected response message to “Not Found”, the result will be:

    User
    1) should return a response with HTTP code 500 and its content if request the gender of "Yong" as "name"
    0 passing (43ms)
    1 failing
    1) User should return a response with HTTP code 500 and its content if request the gender of "Yong" as "name":
    Uncaught AssertionError: expected 'Internal Server Error' to equal 'Not Found'
    + expected - actual
    -Internal Server Error
    +Not Found

    6) Revert all the changes again, create another JSON file named “combined.json”, and add the following:

    [
      {
        "include": "user1.json"
      },
      {
        "include": "user2.json"
      }
    ]

    And start the Mock Server using java -jar moco-runner-0.11.1-standalone.jar http -p 3000 -g combined.json

    7) Use Mocha to run all the test scripts:

    mocha *.js

    And you should see:

    User
    ✓ should return a response with HTTP code 200, and the response text should be "Male" if request the gender of "Yale" as "name"
    User
    ✓ should return a response with HTTP code 500 and its content if request the gender of "Yong" as "name"
    2 passing (49ms)

    8) If you want to run all the steps from a single test script, you can create a JS file named “users_test.js” and add the following content:

    var should = require('chai').should(),
        expect = require('chai').expect,
        supertest = require('supertest'),
        api = supertest('http://localhost:3000');
    describe('User', function() {
      it('should return a response with HTTP code 200, and the response text should be "Male" if request the gender of "Yale" as "name"', function(done) {
        api.put('/users/1').set('Accept', 'application/json').send({name:"Yale"}).expect(200).end(function(err,res){
          expect(res.body).to.have.property("gender");
          expect(res.body.gender).to.equal("Male");
          done(err);
        });
      });
      it('should return a response with HTTP code 500 and its content if request the gender of "Yong" as "name"', function(done) {
        api.post('/users/2').set('Accept', 'application/json').send({name:"Yong"}).expect(500).end(function(err,res){
          if (err) return done(err);
          expect(res.error.text).to.equal("Internal Server Error");
          done(err);
        });
      });
    });

    9) Run the test using mocha users_test.js; it will show the same result as above.

Tips:

  1. If you want to test APIs protected by OAuth, you need to switch from SuperTest to unirest. Thanks to QiLei.

Later on:

Since you can already run your test scripts from the command line, you can simply add them to a CI tool, such as Jenkins.

Have fun!

You can find all the test scripts from here.

Quick start for appium (iOS and Android)

There have been many changes compared to early versions of appium, so I would like to start from scratch and build a simple test script for appium. For further reading on how to apply Page Object and BDD to appium, you can read my previous articles Cookbook for appium and BDD with PageObject.

Environment Settings:

  1. Install Xcode and its command line tools.

    To install the Xcode command line tools, you can use the following command in Terminal:

    xcode-select --install

    And download the OS versions you want to work with.

  2. It’s better to write the test script in Ruby, as it’s easy to get familiar with. To install/upgrade it, we can use the following commands:

    1) Install Homebrew

    /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

    2) Install Ruby

    brew install ruby

  3. Install appium and related gems:

    brew install libimobiledevice --HEAD
    brew install node
    npm install -g ios-deploy
    sudo gem install xcpretty
    npm install appium-doctor -g
    npm install -g appium
    npm install wd
    npm install appium-xcuitest-driver
    brew install carthage
    gem install --no-rdoc --no-ri bundler
    gem install --no-rdoc --no-ri appium_lib
    gem install --no-rdoc --no-ri appium_console
  4. Install JDK for Android:

    Download from Java SE Development Kit 8 - Downloads and install.

  5. Install Android Studio from Download Android Studio and SDK Tools | Android Studio and download the OS version you want to work with.

  6. Configure the bash_profile to set environment variables:

    Edit your ~/.bash_profile (if you don’t have one, just create it) and add the following lines to it:

    export ANDROID_HOME=/Users/$your_name/Library/Android/sdk
    export ANDROID_SDK=$ANDROID_HOME
    PATH=$PATH:$ANDROID_HOME/build-tools
    PATH=$PATH:$ANDROID_HOME/platform-tools
    PATH=$PATH:$ANDROID_HOME/tools
    export JAVA_HOME="`/System/Library/Frameworks/JavaVM.framework/Versions/Current/Commands/java_home`"
    PATH=$PATH:$JAVA_HOME/bin
    export PATH
  7. Run appium-doctor in Terminal to see if everything works fine. (Everything should show up green.)

Generate the apps to be tested:

  1. Download the iOS UIKit Catalog source code from UIKitCatalog.

    1) Unzip it and open the project in the Swift folder.

    2) Build it in XCode by selecting the OS version you would like to test.

    3) Expand the “Products” folder in the project structure, and select the app generated.

    4) Right-click it, select “Show in Finder”, and copy the generated .app bundle to your test folder for future use.

  2. Download the “TextSwitcher” from “Import an Android code sample” in Android Studio welcome screen.

    1) Change the “minSdkVersion” from 7 to 9 in “build.gradle (Module:Application)” and build the app.

    2) Right-click the root of the project structure, select “Reveal in Finder”, and copy the .apk file from “/Users/$your_name/AndroidStudioProjects/TextSwitcher/Application/build/outputs/apk” to your test folder for future use.

Identify elements using appium GUI:

  1. Download the appium GUI through their website;

  2. Start the appium GUI after installing it;

  3. Start the server with the default settings;

  4. Start a new session;

  5. Input all capabilities you want to use in “Desired Capabilities”, such as:

    platformName     text   iOS
    platformVersion  text   10.2
    deviceName       text   iPhone 7
    app              text   /Users/$your_name/Documents/UIKitCatalog.app

    The above shows the basic capabilities you must pass to appium to run the app. (For Android, it’s basically the same, unless your first activity is not MainActivity, in which case you need to set “appActivity” to point to the right one.)

  6. Then you can actually “Start Session”. (For Android, you need to start your emulator before this; appium won’t automatically start an Android emulator the way it starts the iOS simulator.)

  7. After a while, another window will pop up. In this window, you should see the app’s UI and be able to inspect its elements.

  8. Once you click an element of the app UI in the inspector window, the element’s properties show up on the right side.

    1) Try to use id to locate elements where possible.

    2) The “Tap”, “Send Keys” and “Clear” buttons in the “Selected Element” panel on the right simulate the corresponding actions, and you will see the results in the app UI.

    3) You can use “Back” at the top of this window to simulate the hardware Back action; next to it is Refresh.

    4) The eye icon is very useful if you would like to record all your steps into a script. Every action after you click that icon is recorded, until the icon is clicked again.

    5) You can choose the language you want the recording in, show/hide the supporting code (the code that starts and ends the session rather than the test steps), and copy or delete the recorded code.

    6) The last icon at the top closes this window and the entire session.

  9. With the inspector, you have either recorded a script or written your own code, using the inspector to locate elements. The code should look like the following:

    require 'rubygems'
    require 'appium_lib'
    caps = {}
    caps["platformName"] = "iOS"
    caps["platformVersion"] = "10.2"
    caps["deviceName"] = "iPhone 7"
    caps["app"] = "/Users/$your_name/Documents/UIKitCatalog.app"
    opts = {
    sauce_username: nil,
    server_url: "http://localhost:4723/wd/hub"
    }
    driver = Appium::Driver.new({caps: caps, appium_lib: opts}).start_driver
    driver.navigate.back
    el1 = driver.find_element(:xpath, "//XCUIElementTypeCell[14]")
    el1.click
    el2 = driver.find_element(:xpath, "//XCUIElementTypeCell[1]/XCUIElementTypeTextField")
    el2.send_keys "123"
    el3 = driver.find_element(:xpath, "//XCUIElementTypeCell[2]/XCUIElementTypeTextField")
    el3.send_keys "456"
    driver.quit

    Save the script to a .sh file, such as test.sh.

Run appium test:

  1. Now we can run the script through the command line (Terminal).

  2. After closing the appium GUI, we need to run appium in a Terminal tab, then open your test folder. From that folder, run the following command:

    ruby test.sh

  3. You should see the simulator start and the test run.

  4. But we DO NOT have any assertion. Let’s add some.

    1) Install RSpec through

    gem install rspec

    2) Include RSpec and its matchers in the “Require” section

    require 'rubygems'
    require 'appium_lib'
    require 'rspec'
    include RSpec::Matchers

    3) Add assertions to the test, like:

    driver.navigate.back
    el1 = driver.find_element(:xpath, "//XCUIElementTypeCell[14]")
    el1.click
    el2 = driver.find_element(:xpath, "//XCUIElementTypeCell[1]/XCUIElementTypeTextField")
    el2.send_keys "123"
    el2.value.should == "123"
    el3 = driver.find_element(:xpath, "//XCUIElementTypeCell[2]/XCUIElementTypeTextField")
    el3.send_keys "456"
    el3.value.should == "456"

    4) To make sure the assertions really work, we can alter one, like:

    el3.value.should == "345"

    5) Run it again; it should show the following in Terminal:

    /usr/local/lib/ruby/gems/2.4.0/gems/rspec-support-3.6.0/lib/rspec/support.rb:87:in `block in <module:Support>': expected: "345" (RSpec::Expectations::ExpectationNotMetError)
    got: "456" (using ==)

    6) Now we can revert the change, and complete the test script.

    7) The final test script should look like:

    require 'rubygems'
    require 'appium_lib'
    require 'rspec'
    include RSpec::Matchers
    caps = {}
    caps["platformName"] = "iOS"
    caps["platformVersion"] = "10.2"
    caps["deviceName"] = "iPhone 7"
    caps["app"] = "/Users/yhuang/Documents/appium/UIKitCatalog.app"
    opts = {
    sauce_username: nil,
    server_url: "http://localhost:4723/wd/hub"
    }
    driver = Appium::Driver.new({caps: caps, appium_lib: opts}).start_driver
    driver.navigate.back
    el1 = driver.find_element(:xpath, "//XCUIElementTypeCell[14]")
    el1.click
    el2 = driver.find_element(:xpath, "//XCUIElementTypeCell[1]/XCUIElementTypeTextField")
    el2.send_keys "123"
    el2.value.should == "123"
    el3 = driver.find_element(:xpath, "//XCUIElementTypeCell[2]/XCUIElementTypeTextField")
    el3.send_keys "456"
    el3.value.should == "456"
    driver.quit
  5. Now it’s all done.

Please Note:

  1. This is just a sample test run; in practice we should write it in a BDD style with Page Objects.

  2. It should also be integrated with CI.

  3. If you want to use Appium to test Android, I recommend using Genymotion rather than an AVD, even if the AVD uses an x86 image.

  4. You can reference the app and script for UIKitCatalog and the app and script for the Android Calculator.

A Survival Guide for New PMs on Domestic Delivery Projects

Having worked on overseas delivery projects for a long time, I was always apprehensive about domestic ones. I assumed that doing domestic projects meant negotiating deals over drinks and dinners like in TV dramas, plus working overtime every day, so I never had the opportunity, or the courage, to get close to a domestic project.

However, as luck would have it, I have recently spent some time in the trenches of a domestic project, and I feel it is worth writing down my experiences and lessons to share with everyone.

You may all be dealing with domestic clients, but different clients have their own distinctive styles. Below, I will walk through the common and the client-specific problems and experiences in the chronological order of a delivery project.

#Pre-sales stage

Before a project formally starts, we need a clear understanding of the “iron triangle” of project management (time, scope and cost). In the pre-sales stage, however, the answers to these questions are vague and need to be analyzed and confirmed step by step.

This is generally tackled in the following three steps.

###Step 1: Understand the background

The first thing we need to do is trace where this project opportunity came from, which means getting as much information as possible from our sales colleagues: which client contact gave us the opportunity, where he/she sits in the client's organization, what suggestions and ideas he/she has for us, the background of the department the opportunity belongs to and how much weight it carries, the problems the project is facing now and may face later, and whom to contact (and how) when we go to discuss requirements.

With this information in hand, we can also look inside our own company for colleagues who have dealt with the same or a similar department of the client to learn more background and technical knowledge. Ideally, find a colleague who has worked with the same client, preferably someone the client knows well and who has influence with them, and bring them along to the requirements discussions. This not only gives us more confidence and a more accurate understanding of the client's needs, it also increases the client's trust in us.

It is best to record and keep all of this information, and to organize it, together with any points of attention, in a systematic way.

At this stage what we learn is mostly the big picture; during the project itself we often forget to start from that big picture when working out how to solve problems better.

###Step 2: Visit the client

With this information, the next step is to schedule a time with the client and pay them a visit.

On the first client visit, both sides usually start by introducing themselves; we generally send the project's PM and Tech Lead, and the main purpose of the visit is to understand the requirements. In this first requirements discussion, we should approach things from the business and value perspective rather than getting bogged down in technical details. We will often find that what we had imagined differs greatly from what the client describes. Be careful not to cling too tightly to our own ideas; instead, follow the client's description and use questions to guide them, step by step, to spell out their pain points and expectations, so that the project goal becomes clear.

At this stage we usually also need to show the client our technical strength, so they feel confident that choosing us was the right decision. This can take the form of a talk or demo on technology relevant to the project. In general, it is best to keep the first visit to about an hour: on the one hand we gain the relevant project knowledge, and on the other it creates the opportunity to present concrete, project-specific technical capability at the next meeting.

Once we have the business requirements and pain points, we need to revise our earlier ideas and designs, summarize them against the latest situation, and preferably get the client's confirmation.

Then, based on this information, we can start designing the technical architecture and assessing risks, and set clear goals for subsequent visits to make those meetings more efficient. The later visits and meetings are a step-by-step refinement: which tech stack, which architecture, how the client will work with our delivery team, and so on. We do need to control the time invested in each meeting and in project preparation, and how long this stage lasts: if it drags on, the client will feel we are unprofessional and it will suggest we don't understand the project. Besides, if the deal ultimately falls through, it is also a waste of company resources.

###Step 3: Contract type

Everything above is about the technical side; a very important point in the pre-sales stage is to understand in what form the client wants to sign the contract with us. Consulting and delivery projects have different ways of working and different deliverables. If we sign a delivery contract but work in consulting mode, then at the end of the project we will face problems with acceptance and with what deliverables to hand over. It may also put the client in an awkward position during acceptance and when reporting to their leadership, because what was actually done does not match what was signed in the contract.

If we manage to resolve the issues above, we can move on to the bidding work. Let's take a look at what unexpected issues the mysterious bidding stage holds.

#Bidding stage

First we need to complete the design of the relevant system according to the requirements of the tender document. This design is not just the technical architecture; it must also include our understanding of the client, of the project's current state and of its long-term vision, plus the technical architecture and implementation plan derived from them. At the same time, we need to showcase our experience and success stories on projects of the same type.

Writing the technical proposal for the bid looks simple: it seems like just writing an essay on an assigned topic.

Rough outline of a bid document

Figure 1.1 - Rough outline of a bid document

In reality, the bid requires close collaboration, especially between the pre-sales colleagues and the delivery team, to make sure that the delivery team understands the client's real intent and accurately grasps their expectations and priorities, and can then produce a proposal with a sound architecture that fully accounts for the client's needs and cost. If an Inception was run for the client before bidding, we also need to work closely with the Inception team so that knowledge is transferred effectively (ideally, let the Inception team write the bid). If we can get artifacts from the Inception team such as wireframes (see Figure 1.2), personas, user journeys, a recommended technical architecture or even epic stories, it helps a lot with defining the project scope and writing the technical proposal.

Wireframes and user journeys

Figure 1.2 - Wireframes and user journeys

###Key point 1: Tech stack and technical architecture

One of the keys to writing the bid is the tech stack and technical architecture. After settling these two according to the client's needs, we also need an architecture review to pool ideas, reduce the delivery risk introduced during architecture design, and better balance delivery pressure against technical excellence. We must consider not only which technologies satisfy the client's requirements, but also the client's cost of running the project, even down to whether the client would have to spend a lot to hire suitable operations staff.

Technical architecture

Figure 1.3 - Technical architecture

###Key point 2: Estimating the workload

Once the tech stack and framework are settled, we need to estimate the workload. When estimating, we cannot base it on the abilities of specific people; we have to estimate the delivery effort in person-days at an average development level, because we don't know who will actually do the development. Contracts with clients are usually priced in person-days, so estimating in difficulty-based points, as we normally do, is fine, but the result eventually has to be converted to person-days. After the estimate, we also need to weigh the sophistication of the architecture design against the delivery timeline together with the pre-sales colleagues. Furthermore, with some clients we have to defend the estimate feature by feature after producing it, to prove there is no padding, which requires a deep understanding of both the project's business and its technology.

###Key point 3: Staffing plan

Another key to the bid is the staffing plan, which depends on the previous two points. Once the technical proposal is set, we need to decide which roles are required based on the specific technical needs, for example front-end and back-end developers in which technologies, whether iOS/Android developers are needed, whether UX is needed, and so on. Based on the workload estimate, we then decide when these people roll on and off the project, ideally leaving some buffer for uncertainties such as requirement changes and leave, and we must also plan for a maintenance period after delivery. Project profit is another thing a PM needs to pay close attention to, and it also affects who is chosen to join the project. Since we usually do not get people who are a perfect fit for the project's technology, the PM is also responsible for growing people so they can meet the project's requirements.

Staffing plan

Figure 1.4 - Staffing plan

In addition to fully addressing the three aspects above, we also need to highlight our strengths so the client can see more clearly where our value lies.

Set your expectations before you start: a bid, especially your first one, will usually have many parts that get revised many times. The revisions are tiring and tedious, but only by getting this step right can the follow-up work go smoothly.

##Bid presentation stage

Bidding involves not only writing the bid document but also presenting it. In between there is actually the “bid submission” activity itself, but that is usually handled by designated colleagues, so as PMs and technical people we focus more on the presentation that follows the submission.

As the name suggests, the bid presentation means walking the client through the submitted bid on site. In practice, there is a lot of preparation before and after.

First, we prepare the presentation version of the bid. You might wonder: didn't we just finish and submit the bid, so why write another one? The printed bid typically runs 150-200 pages, while the presentation slot is usually half an hour plus about 15 minutes of Q&A, so it is impossible to cover everything in 30 minutes. We therefore condense the bid and present it mostly as charts and images. Like writing the bid itself, this goes through many revisions.

Second, rehearse the presentation. Someone should play the client, raising all kinds of questions and challenges, so the presenter is as prepared as possible for different audiences and situations, can improvise, and keeps the whole presentation under control. Rehearsals are run several times to exercise presentation skills and composure.

Third, before the formal presentation, prepare the laptop charger, presentation clicker, display adapter, a USB drive with the presentation files, and the files in multiple formats (Keynote, PPT, PDF, etc.) to rule out anything going wrong. Arrive early; if the client is far away, leave plenty of time so that being late doesn't undermine the presentation.

Finally, during the presentation, just control the pace, keep an eye on the time, and perform as rehearsed.

After the bid and the presentation come commercial negotiation and contract signing, which most of us never touch (in truth, I haven't), so I will skip them.

Once all this preparation is done, the formal delivery stage can begin.

#Delivery stage

In the delivery stage a PM has to pay attention to many things, so I'll break them into several areas.

###In this stage, we also need to communicate with the client to understand the following:

  1. Understand the client's go-live process: for example, whether going live requires applying for a domain name, whether there are pre-release tests and security checks, their internal workflows and norms (the tools that must be used, whether meeting minutes must be sent after every meeting, etc.), and documents and reports (daily and weekly reports, plus documents such as user manuals and maintenance guides that must be provided at delivery).

  2. Understand how the client defines project success: the client's business side, their technical side and our delivery team each define success differently. We need to understand these differences, then coordinate and compromise so that every party is as satisfied as possible and, after delivery, each of them considers the project a success.

  3. Give the client consistent points of contact: this does not mean only the PM talks to the client, but that for each type of issue the client can at least go to the same person, instead of having to ask around every time to find the right person, or finding no one responsible at all. For example, for project management and staffing the client can go to the PM, while for technical architecture and implementation questions they can go to the Tech Lead.

Beyond understanding the points above: clients, especially the business side, are usually uneasy about the delivery team, worrying about progress and the quality of the result, all the more so with a team they are working with for the first time. So how can we change that?

  • First, the daily stand-up lets the client see what we are working on and helps us resolve problems quickly. But we need to keep its format in check: the stand-up must not turn into a status report to the client or a question-and-answer session.

  • Second, a story wall (physical and electronic) lets the client see the project's progress and status in real time, so they feel the project is under control.

  • Third, ask the client for help with problems at the right time and in the right measure. This gives them a sense of participation and achievement, though it has to be kept in proportion. Avoid the delivery team holding side meetings on its own, which makes the client feel excluded from the team.

In the end, what matters most is the software we build. Not just the client but the delivery team as well only gain real confidence in the project's progress and quality once they see the actual product.

###For our own team, the following points deserve attention:

  1. Unlike many companies, once delivery starts we don't jump straight into building features; there is an iteration 0, which includes refining the delivery scope, analyzing the story cards for iteration 1, making a release plan, setting up infrastructure, and validating technical feasibility. This lets us rule out, discover and identify problems with the architecture, framework and technical feasibility at the very start of the project and adjust quickly.

  2. We also need to build a good relationship with the client, so they realize we are working toward the same goal, that reaching it takes full cooperation, and that all parties need to work together and adjust to each other.

  3. Compromise where appropriate while holding the bottom line and our principles, to keep the client as satisfied as possible. But for anything we cannot satisfy, or anything that affects the schedule, quality, staffing, cost or the relationship with the client, communicate and consult with the delivery assurance colleagues promptly and try to resolve the problem while it is still small.

  4. Watch and manage the client's expectations. If development is running slow, explain why in time, offer solutions and ask for help; if delivery is running fast, explain not only the effort that made the speed-up possible but also the problems that may exist and what could be improved.

  5. Control the project's rhythm during development: the team should neither be in constant overtime nor idle; the work needs a sustainable cadence. If the client proposes a 996 schedule (9am to 9pm, six days a week), refuse it flatly, because what we want is sustainable, efficient development rather than a short burst of heroics. This of course excludes exceptional situations, such as possible overtime right before going live.

  6. For better collaboration, agree with all team members at the start of the project on internal workflows and internal communication channels, such as WeChat groups and mailing lists, while being mindful of how frequently they are used so as not to add to the client's worries about progress. We also need a communication plan for the client: how requirements are confirmed with them, when demos are given, and how they should raise issues with the team, so that problems get solved without disrupting the team's delivery pace.

  7. Many of the clients we work with have substantial documentation requirements. Under delivery pressure, the client may allow us to postpone documents such as design documents, but we should still keep the raw materials, so that when the client needs the documents we can generate them quickly from those materials instead of starting from scratch.

###What other responsibilities does the PM have?

  1. The PM needs to shield the team from distractions so it can focus, for example by minimizing interruptions from unrelated meetings and fielding the client's ad-hoc questions about technical details. The PM should also avoid passing too much pressure on to team members: for example, a big change the client wants in requirements or architecture could put heavy pressure on the delivery schedule, so the PM should filter such information first and inform the whole team only once the change is definite, avoiding the spread of uncertain information and the unrest it causes.

  2. The PM also needs to watch the team's morale and influence the team in a positive way to lift it, and take care of logistics: coordinating the resources development needs, office supplies, and activities such as team building.

  3. The delivery team should also impress the client from time to time to build more trust, for example by presenting the architecture as a hand-drawn sketch, something the client has not seen before;

Presenting the architecture as a hand-drawn diagram

Figure 1.5 - Presenting the architecture as a hand-drawn diagram

or decorating the physical story wall with masking tape instead of drawing crooked lines on the whiteboard by hand;

Decorating the physical wall with masking tape

Figure 1.6 - Decorating the physical wall with masking tape

or holding discussions of technical problems while sketching on the whiteboard.

  4. When working on the client's site, the client usually hopes to improve their own staff by learning our way of working, so we can consider spreading some project practices, such as holding open retrospectives, as long as it doesn't affect progress.

  5. When working on site, pay attention to whether the client's network speed and quality affect our productivity. If there is a problem, solve it as early as possible; tasks like setting up development and production environments are especially demanding on the network.

Although the first domestic delivery project I have experienced is still in full swing, that doesn't stop me from imagining what comes after delivery.

#After delivery

After the project is finished, we maintain it for a period of time. During this period we only fix issues in the existing code; no new feature development takes place.

For the PM,

  1. The PM should send a go-live announcement email, giving team members a sense of achievement and pride and letting more people know what the project does.

  2. The PM is also responsible for the project summary: not only capturing what worked, but also analyzing the problems that occurred and how to avoid them, so that more people can avoid the same detours.

  3. Last but not least is the team celebration; whether it's buying a cake or some other activity, the team needs to be recognized and thanked.

After that, everyone can get ready to dive into a new project :).

The summary above is drawn mostly from my personal experience and is bound to have gaps. I hope this article prompts you to think of things you may have overlooked and to manage projects better and more thoroughly. Corrections are very welcome.