
What I learnt from interviews

Interviewing is a double-edged sword for an interviewer:

  • if you attend too many interviews, they become a burden, because they take a lot of time;

  • if you don’t take part at all, the company will struggle to hire enough qualified employees, so you will be worked to death with no one to help you out.

In this article, I will not talk about the techniques we can use in an interview to identify a qualified co-worker, but about what I learnt from interviews: lessons that changed my mindset and helped me use interview time wisely.

Many people are pleased when they are first asked to join an interview as an interviewer, because they see interviewing as a privilege of senior employees. After some time, though, they come to regard interviews as a waste of time, since they just repeat themselves regardless of the candidate’s background.

Thus, the first thing we need to change in an interview is our mindset: we need to understand the benefits an interview can bring to us and to the company.

  • Finding a qualified co-worker is the most obvious one.

    It seems that only the company benefits from this, but if the candidate is going to join YOUR team, the interview becomes even more important to you. No team is so solid that no one ever leaves, so it’s important to evaluate whether the candidate is someone you want to work with.

  • Learning from others can also happen in an interview.

    You can learn not only from your peer interviewer (their interview skills, style and even knowledge) but also from the candidate.

    By definition, “An interview is a conversation where questions are asked and answers are given … between an interviewer and an interviewee.”

    As we can see, an interview is not a one-way conversation but two-way communication, which requires both parties to share their understanding and experience around a topic, rather than just letting the candidate ask questions at the end of the interview.

    We all understand that everyone is unique, with their own experience and knowledge; meanwhile, no one is perfect. Hence, we can learn from everyone. Joining an interview as the interviewee doesn’t make someone inferior to the interviewer, so if you are a passionate learner, you won’t miss this opportunity. Sometimes, if you ask the candidate about a real problem in your current project, they may even help you find a better solution!

  • Like ‘Learning by Teaching’, learning by interviewing.

    During preparation, you need to consolidate your knowledge around the questions you are going to ask; you also need to prepare for the questions the candidate may ask.

    In the interview, you can learn from both your pair interviewer and the candidate.

    After the interview, you can learn from a retro, alone or with your pair, and from the feedback of your pair and even the candidate.

  • You can also practise and sharpen your skills in the interview.

    There are lots of soft skills you can practise and improve, such as facilitation, communication and even summarization. The more you practise, the more expert you will become.

  • An interview is also a form of publicity for the company.

    In the interview, you demonstrate the company’s culture and way of working. You are the window through which the candidate gets to know the company.

As shown above, interviewing is not a waste of time, but a medium through which we can learn and improve.

If we no longer think it’s a waste, how can we make it more efficient for our goals?

  • Personally speaking, I prepare a set of questions for each interview, so that I don’t spend much effort on the questions themselves and can focus on the candidates and their experience, digging in with follow-up questions that lead to more detailed, meaningful and insightful answers.

  • Also, before the interview, I go through the candidate’s resume and mark the key points, such as key roles, career changes and anything that may lead to an in-depth question, and I write down my assumptions so that I can validate them in the interview.

  • In the interview, I put myself into the candidate’s situation as they tell their story, think about what my own actions would have been, compare them with the candidate’s, and then work out the causes of any differences so I can learn how to improve after the interview.

  • After the interview, I plan improvement actions based on the notes I wrote down during the interview. I also ask my pair to give me feedback.

An interview can be anything you want it to be, depending on how you see it. Don’t be a slave to interviews; be their master!

How to measure the success of a delivery project in ThoughtWorks

Measurement has never been easy: what we want from it is not just metrics, but motivation for the team to achieve higher goals.

There are two kinds of measurement commonly used: measurement of a product and measurement of a project.

You may wonder why we separate them like this. It’s because the measurement of a product is much the same across companies, whether the product is developed in an Agile or a traditional way. It usually includes the following:

  • Bugs found in production (with severity);

  • Customer Rating (if applicable);

  • Customer Engagement: e.g. bounce rate, retention rate, conversion rate, duration, interval;

  • Number of Users: e.g. active users, new users and updated users;

  • Visits: e.g. page view/PV, unique visitor/UV, total visits;

  • Revenue: e.g. average revenue per user/ARPU, average revenue per paid user/ARPPU, monthly payment ratio/MPR, lifetime value/LTV.

All these measurements are data-driven, and thus objective.

But when it comes to the measurement of a project, things are different. The measurement of a project covers two questions:

  1. How do we review the project team?

  2. How do we review each individual team member?

Traditionally, we use Key Performance Indicators (KPIs) or Return on Investment (ROI) to evaluate the team, and KPIs plus the manager’s opinion to evaluate individual team members.

In an Agile world, especially in a company like ThoughtWorks, which has no product of its own but delivers value to clients through professional services, the two kinds of measurement are combined.

  • Firstly, the product success metrics are folded into how we measure the project, unlike in a product company, where the measurement of the product and of the project that delivers it are kept separate. That is to say, if the product metrics meet or exceed the client’s expectations, the team gets a high score on client satisfaction. Hence, the project team continuously monitors the product metrics, mostly by integrating them into CI pipelines and into monitoring and analytics platforms such as New Relic.

    In the meantime, the project team also focuses on delivering value to customers (end users, stakeholders and the client’s company), which also improves client satisfaction. When implementing a feature, the team first works out the underlying problem to be solved and the value delivered once it’s done, then brainstorms all the possible solutions before actually building anything. This approach pushes everyone to think, in their daily work, about the value they bring to the product and the team, rather than just working through an endless backlog and keeping themselves busy without ever really thinking about the product.

  • Secondly, the client relationship is also taken into consideration as part of the measurements. Here, ThoughtWorks expects the team to provide the best solutions based on its experience and expertise, even if they do not align with what the client originally asked for. This benefits the relationship in the long run and in the big picture, because the client understands that we are focused on the things that bring them value.

  • Thirdly, ROI, or margin, also plays an important role in the review. Every company’s primary objective is to survive, and a ‘reasonable’ margin helps the company grow sustainably. Why does ThoughtWorks aim for a ‘reasonable’ margin rather than as much as possible? Because chasing maximum profit would force compromises on the other aspects listed below, so we trade some margin for balance.

  • Fourthly, influence on the client is also an important aspect of the review. In most cases, clients choose ThoughtWorks not simply as a vendor to deliver a product or a piece of work but, more importantly, to lift the capability of their own teams and bring in new thinking and technical expertise. To meet this expectation, the team usually runs brown bag sessions, pair programming and knowledge sharing to widen its influence, and the measurement of influence serves as feedback on those activities.

  • Fifthly, each team member’s satisfaction is also a critical measurement. It takes great effort to make everyone in the team happy, since everyone is eager to improve their skills, work on challenging tasks, contribute more to the product, work more efficiently and, of course, avoid overtime. This is why ThoughtWorks values the health of both the project and the team members, and believes it brings more efficiency and energy to the team.

  • Sixthly, growing team members’ capabilities matters not just to each individual, but also to the project team and to ThoughtWorks as a company. The more experience and expertise employees have, the more challenging situations and projects they can handle, and thus the more influence the company can have on clients and the industry.

  • Last but not least, how we change the world also weighs heavily in the measurements. ThoughtWorks gives high priority to products and projects that advance social and economic justice, even when that sometimes means less profit.

In summary, ThoughtWorks assesses a delivery project from the following seven aspects:

  1. Client’s satisfaction and value delivered
  2. Client Relationship
  3. Margin
  4. Influence on Client
  5. Team members’ satisfaction
  6. Leveraging of team
  7. Social and Economic Justice

If you are familiar with ThoughtWorks’ three-pillar business model (Pillar 1: Sustainable Business; Pillar 2: Software Excellence; Pillar 3: Social and Economic Justice), you will see that these measurements simply reflect that model: measurements 1 to 3 align with Pillar 1, 4 to 6 with Pillar 2, and 7 with Pillar 3.

In my experience, just as Conway’s law states that “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations”, an organization is also constrained to produce project measurements which are copies of its business model and culture.

What are the measurements in your company?

Quick start for InSpec and ServerSpec, also comparison of them

InSpec and ServerSpec are infrastructure testing tools based on Ruby. InSpec has newly been added to the ThoughtWorks Tech Radar.

Prerequisite:

RVM, Ruby (>2.2) and rubygems should be installed.

Get Started for InSpec:

  1. Install InSpec.

    gem install inspec

  2. Write and run the InSpec script according to its API document.

    You can reference the script here; a minimal sample control is also sketched after this list.

    Use inspec exec inspec.rb to run it and check the result.

    As you can see, the script is pretty much the same as the one we used for ServerSpec; you can reference that script here. The instructions for running it are in this article.

    There is also an official article about migrating from ServerSpec to InSpec (where we can also see the differences in resources between the two).

  3. To generate a JSON file as the test result, run inspec exec sample_inspec.rb --format json >report.

    This writes the test result to a file named “report” every time the test runs.
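For reference, here is a minimal sketch of what such an InSpec control file might contain. The package, service and port under test (httpd, port 80) are assumptions for illustration, not taken from the referenced script:

    # sample_inspec.rb: a hypothetical minimal InSpec control file.
    # The package, service and port under test are illustrative assumptions.
    describe package('httpd') do
      it { should be_installed }
    end

    describe service('httpd') do
      it { should be_enabled }
      it { should be_running }
    end

    describe port(80) do
      it { should be_listening }
    end

Each describe block targets an InSpec resource, and the matchers read like RSpec, which is what makes migrating from ServerSpec straightforward.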

Comparison between InSpec and ServerSpec:

  • InSpec has 98 types of resources but ServerSpec has only 41.

  • InSpec has more comprehensive documentation (you can reference it here), and it even has a series of detailed tutorials.

In general, I would suggest using InSpec for infrastructure testing in new projects.

Quick start for UI tests with XCTest

XCTest was introduced in Xcode 5 for iOS 7, but only with Xcode 8 and iOS 10 did XCTest become the default, and the only choice, for UI testing; the older UIAutomation was deprecated at that point.

Prerequisite:

  • Xcode and the Xcode command line tools should be installed.

  • Download the sample Xcode project from here, unzip it and open the Swift project.

Get Started for XCTest:

  1. Add a new Target of iOS UI Testing through “File”->”New”->”Target…”->”iOS UI Testing Bundle”, and input all the required info.

  2. Use the red dot (“Record UI Test”) at the bottom to start recording; once finished, click it again to complete.

  3. Add assertions at the end of each scenario, such as those in “UIKitCatalogUITests.swift” in XCTestSample.zip.

    For detailed usage of assertions, you can refer to “Test Assertions” in the XCTest topics.

  4. It’s hard to check the text of some labels directly, so if we want to check it, we need to set a value for the label’s accessibility identifier and then use the following lines in the test script:

    // Look the label up by its accessibility identifier, then assert on its text.
    let label = app.staticTexts["label_identifier"]
    XCTAssertEqual(label.label, "label_text")
    // Alternatively, look it up by its text and assert that it exists.
    XCTAssert(app.staticTexts["label_text"].exists)
  5. If you want to learn more about XCTest, you can refer to the Cheat Sheet.
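Once the UI testing target is in place, the tests can also be run from the command line with xcodebuild test, passing your scheme and a destination simulator, which makes it straightforward to hook them into a CI pipeline.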

Configure ServerSpec to run tests against multiple hosts

We learnt how to set up tests with ServerSpec in Quick start for ServerSpec and Testinfra, also comparison of them, but that folder structure only supports testing against one host. In the real world, we need to reuse the tests against multiple hosts, so let’s see how to do that.

Prerequisite:

ServerSpec has been set up according to the steps in Quick start for ServerSpec and Testinfra, also comparison of them.

Configure ServerSpec to run tests against multiple hosts:

Actually, we only need to change the Rakefile in the test folder. Its content should be changed to the following:

require 'rake'
require 'rspec/core/rake_task'

hosts = %w(
  host1
  host2
)

task :spec => 'spec:all'

namespace :spec do
  task :all => hosts.map { |h| 'spec:' + h }

  hosts.each do |host|
    desc "Run serverspec to #{host}"
    RSpec::Core::RakeTask.new(host) do |t|
      ENV['TARGET_HOST'] = host
      t.pattern = "spec/{host_server}/*_spec.rb"
    end
  end
end

Please note that host1 and host2 are the hosts we want to run our tests against; we put host_server in the pattern above because our test script structure has a folder named “host_server”.

You can reference the script here.
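With this Rakefile in place, rake spec runs the suite against every host in turn, while rake spec:host1 targets a single host. For context, here is a minimal sketch of the kind of spec_helper.rb that serverspec-init generates to consume the TARGET_HOST variable set above; the generated file on your machine may differ in detail:

# spec/spec_helper.rb: a minimal sketch; serverspec-init generates a fuller version.
require 'serverspec'
require 'net/ssh'

set :backend, :ssh

# Pick up the host exported by the Rake task and resolve it via ~/.ssh/config.
host = ENV['TARGET_HOST']
options = Net::SSH::Config.for(host)

set :host, options[:host_name] || host
set :ssh_options, options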

Integrate ServerSpec, Testinfra and ZAP with CentOS 7 Minimal

We already know how to set up tests with ServerSpec and Testinfra from Quick start for ServerSpec and Testinfra, also comparison of them and Quick start for integrating ZAP into CI. Now let’s see how to integrate them with CentOS 7 Minimal.

Prerequisite:

  1. Install CentOS 7 Minimal, set up the root user and log in.

  2. Enable network on CentOS 7 Minimal following this article.

    The detailed steps are:

    1) Open Network Manager with nmtui;

    2) Choose “Edit connection” and press Enter (use the TAB key to move between options);

    3) Choose your network interface and click “Edit”;

    4) Choose “Automatic” under “IPv4 CONFIGURATION”, check the “Automatically connect” checkbox, then press “OK” to quit Network Manager;

    5) Restart the network service with service network restart.

  3. Adjust the screen resolution following this article.

    The detailed steps are:

    1) Edit GRUB file: vi /etc/default/grub

    2) Append vga=792 to the line GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet",

    so that you have GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet vga=792";

    Check the detailed GRUB VGA Modes, or scroll down to the table at the bottom of this article (there, 792 corresponds to 1024x768 with 24-bit colour).

    3) Add this change to GRUB configuration: grub2-mkconfig -o /boot/grub2/grub.cfg

    4) Reboot for the change to take effect: reboot.

  4. Add a Yum source and update.

    yum install epel-release
    yum -y update
  5. Configure an ssh key to connect to the server easily.

    • Generate ssh key: ssh-keygen -t rsa
    • Copy ssh public key to server: ssh-copy-id -i ~/.ssh/id_rsa.pub username@server
    • Add the following lines to ~/.ssh/config.

      Host server
      User username

Quick start for ServerSpec:

  1. Install Ruby.

    yum install ruby

  2. Install RubyGems.

    yum install rubygems

  3. Install Rake.

    gem install rake

  4. Install ServerSpec.

    gem install serverspec

  5. Initialize the ServerSpec folder with basic settings. Please note that the target server is set in this step.

    serverspec-init

  6. Write and run the ServerSpec script according to its API document.

    You can reference the script here.

    Use rake spec in the test folder to run the tests and check the result.

  7. To run a specific test rather than the entire test suite:

    Use rake spec spec/host_server/sample_spec.rb in the test folder to run it and check the result.

Quick Start for Testinfra:

  1. Python 2.7 is installed by default, so we just need to install Pip.

    yum -y install python-pip
    pip install --upgrade pip
  2. Install Testinfra and Paramiko.

    pip install testinfra
    pip install paramiko

  3. Write and run the Testinfra script according to its API document.

    You can reference the script here.

    Use testinfra testinfra_test.py to run it and check the result.

  4. Some useful arguments can make the test result clearer.

    Instead of using testinfra testinfra_test.py directly, we can add some arguments, such as -q, -s, --disable-warnings and --junit-xml.

    • The argument -q runs Testinfra in quiet mode, with less info exposed
    • The argument -s disables output capturing
    • The argument --disable-warnings disables warnings during Testinfra runs
    • The argument --junit-xml exports the Testinfra test result into an XML file

    After adding those arguments, the command should look like testinfra -q -s --disable-warnings testinfra_test.py --junit-xml=report.xml

  5. Now we can run Testinfra against the server using testinfra -q -s --disable-warnings --ssh-config=/Path/to/ssh/config --hosts=server testinfra_test.py --junit-xml=report.xml.

Quick Start for ZAP:

  1. Install JDK.

    yum install java-1.8.0-openjdk*

  2. Download ZAP installation script.

    wget https://github.com/zaproxy/zaproxy/releases/download/2.6.0/ZAP_2_6_0_unix.sh

  3. Change permission of the installation script and execute it.

    chmod 777 ZAP_2_6_0_unix.sh
    ./ZAP_2_6_0_unix.sh
  4. Install required libraries.

    1) Install Selenium-WebDriver

    gem install selenium-webdriver

    2) Install IO

    gem install io

    3) Install Rest-Client

    yum install gcc-c++
    gem install rest-client

    4) Install RSpec

    gem install rspec

    5) Install and configure the headless Firefox

    yum -y install firefox Xvfb libXfont Xorg
    yum -y groupinstall "X Window System" "Desktop" "Fonts" "General Purpose Desktop"
    Xvfb :99 -ac -screen 0 1280x1024x24 &
    export DISPLAY=:99

    6) Download and set up geckodriver.

    wget https://github.com/mozilla/geckodriver/releases/download/v0.18.0/geckodriver-v0.18.0-linux64.tar.gz
    tar -xvzf geckodriver-v0.18.0-linux64.tar.gz
    mv geckodriver /usr/lib64

    7) Add the following line to ~/.bash_profile:

    export PATH=$PATH:/usr/lib64

    Then run source ~/.bash_profile.

    8) Alternatively, we can use Chromedriver:

    i) Create a file called /etc/yum.repos.d/google-chrome.repo and add the following lines of code to it.

    [google-chrome]
    name=google-chrome
    baseurl=http://dl.google.com/linux/chrome/rpm/stable/$basearch
    enabled=1
    gpgcheck=1
    gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

    ii) Check the latest version available from Google’s own repository using yum info google-chrome-stable

    iii) Update yum using yum update

    iv) Install Chrome (and the unzip tool, needed below) using yum install google-chrome-stable unzip

    v) Download Chromedriver using wget https://chromedriver.storage.googleapis.com/2.32/chromedriver_linux64.zip

    vi) Unzip Chromedriver using unzip chromedriver_linux64.zip

    vii) Move Chromedriver to a directory on $PATH using mv chromedriver bin/

  5. Use ruby add_assertions_to_check_zap_result.rb to run the tests and check the result.

    You can reference the script here.

GRUB VGA Modes

Colour Depth   640x480   800x600   1024x768   1280x1024   1400x1050   1600x1200
8 (256)        769       771       773        775         -           -
15 (32K)       784       787       790        793         -           -
16 (65K)       785       788       791        794         834         884
24 (16M)       786       789       792        795         -           -

Quick start for integrating ZAP into CI

OWASP ZAP is a widely used, open-source security testing tool. Let’s see how we can integrate it with our automated functional testing in CI.

Prerequisite:

RVM, Ruby and rubygems are installed.

Get Started:

  1. Download ZAP from OWASP_Zed_Attack_Proxy_Project and install it.

  2. Install Selenium-WebDriver, IO, Rest-Client and RSpec gems:

    1) Install Selenium-WebDriver

    gem install selenium-webdriver

    2) Install IO

    gem install io

    3) Install Rest-Client

    gem install rest-client

    4) Install RSpec

    gem install rspec

  3. Write and run a basic Selenium-WebDriver script using Ruby. The script is the same as in my previous Gitbook BDD with PageObject.

    The script will look like the following:

    require 'selenium-webdriver'
    driver = Selenium::WebDriver.for :firefox
    driver.get "http://www.google.com"
    element = driver.find_element :name => "q"
    element.send_keys "Cheese!"
    element.submit
    p "Page title is #{driver.title}"
    wait = Selenium::WebDriver::Wait.new(:timeout => 10)
    wait.until { driver.title.downcase.start_with? "cheese!" }
    p "Page title is #{driver.title}"
    driver.quit

    You can reference the script here.

    Use ruby simple_script.rb to run it and check the result.

  4. Add steps to start ZAP and proxy the test traffic through ZAP.

    The script then becomes the following:

    require 'selenium-webdriver'
    + require 'io/console'
    + system("pkill java") #To close any existing ZAP instance.
    + system("pkill firefox") #To close any existing Firefox instance.
    + IO.popen("/Applications/ZAP\\ 2.6.0.app/Contents/Java/zap.sh -daemon -config api.disablekey=true") #The path here should be the zap.sh path under the ZAP package/folder on your machine; with -config api.disablekey=true, ZAP will not check the API key, which is enabled by default since ZAP 2.6.0
    + p "OWASP ZAP launch completed"
    + sleep 5 #To let ZAP start completely
    + profile = Selenium::WebDriver::Firefox::Profile.new
    + proxy = Selenium::WebDriver::Proxy.new(http: "localhost:8080") #Normally ZAP listens on port 8080; if not, change this to the actual port ZAP is listening on
    + profile.proxy = proxy
    + options = Selenium::WebDriver::Firefox::Options.new(profile: profile)
    + driver = Selenium::WebDriver.for :firefox, options: options
    - driver = Selenium::WebDriver.for :firefox
    driver.get "http://www.google.com"
    element = driver.find_element :name => "q"
    element.send_keys "Cheese!"
    element.submit
    p "Page title is #{driver.title}"
    wait = Selenium::WebDriver::Wait.new(:timeout => 10)
    wait.until { driver.title.downcase.start_with? "cheese!" }
    p "Page title is #{driver.title}"
    driver.quit

    You can reference the script here.

    Use ruby add_zap_start.rb to run it and check the result.

  5. Read the test results from ZAP.

    The script then becomes the following:

    require 'selenium-webdriver'
    require 'io/console'
    + require 'rest-client'
    system("pkill java") #To close any existing ZAP instance.
    system("pkill firefox") #To close any existing Firefox instance.
    IO.popen("/Applications/ZAP\\ 2.6.0.app/Contents/Java/zap.sh -daemon -config api.disablekey=true") #The path here should be the zap.sh path under the ZAP package/folder on your machine; with -config api.disablekey=true, ZAP will not check the API key, which is enabled by default since ZAP 2.6.0
    p "OWASP ZAP launch completed"
    sleep 5 #To let ZAP start completely
    profile = Selenium::WebDriver::Firefox::Profile.new
    proxy = Selenium::WebDriver::Proxy.new(http: "localhost:8080") #Normally ZAP listens on port 8080; if not, change this to the actual port ZAP is listening on
    profile.proxy = proxy
    options = Selenium::WebDriver::Firefox::Options.new(profile: profile)
    driver = Selenium::WebDriver.for :firefox, options: options
    driver.get "http://www.google.com"
    element = driver.find_element :name => "q"
    element.send_keys "Cheese!"
    element.submit
    p "Page title is #{driver.title}"
    wait = Selenium::WebDriver::Wait.new(:timeout => 10)
    wait.until { driver.title.downcase.start_with? "cheese!" }
    p "Page title is #{driver.title}"
    + JSON.parse RestClient.get "http://localhost:8080/json/core/view/alerts" #To trigger ZAP to raise alerts if any
    + sleep 5 #Give ZAP some time to process
    + response = JSON.parse RestClient.get "http://localhost:8080/json/core/view/alerts", params: { zapapiformat: 'JSON', baseurl: "http://clients1.google.com", start: 1 } #Get the alerts ZAP found; note the baseurl is matched exactly from the beginning of the URL
    driver.quit
    + RestClient.get "http://localhost:8080/JSON/core/action/shutdown" #Close ZAP instance

    You can reference the script here.

    Use ruby read_zap_result.rb to run it and check the result.

  6. Set up assertions to check the Low Risks.

    The script then becomes the following:

    require 'selenium-webdriver'
    require 'io/console'
    require 'rest-client'
    + require 'rspec/expectations'
    + include RSpec::Matchers
    system("pkill java") #To close any existing ZAP instance.
    system("pkill firefox") #To close any existing Firefox instance.
    IO.popen("/Applications/ZAP\\ 2.6.0.app/Contents/Java/zap.sh -daemon -config api.disablekey=true") #The path here should be the zap.sh path under the ZAP package/folder on your machine; with -config api.disablekey=true, ZAP will not check the API key, which is enabled by default since ZAP 2.6.0
    p "OWASP ZAP launch completed"
    sleep 5 #To let ZAP start completely
    profile = Selenium::WebDriver::Firefox::Profile.new
    proxy = Selenium::WebDriver::Proxy.new(http: "localhost:8080") #Normally ZAP listens on port 8080; if not, change this to the actual port ZAP is listening on
    profile.proxy = proxy
    options = Selenium::WebDriver::Firefox::Options.new(profile: profile)
    driver = Selenium::WebDriver.for :firefox, options: options
    driver.get "http://www.google.com"
    element = driver.find_element :name => "q"
    element.send_keys "Cheese!"
    element.submit
    p "Page title is #{driver.title}"
    wait = Selenium::WebDriver::Wait.new(:timeout => 10)
    wait.until { driver.title.downcase.start_with? "cheese!" }
    p "Page title is #{driver.title}"
    JSON.parse RestClient.get "http://localhost:8080/json/core/view/alerts" #To trigger ZAP to raise alerts if any
    sleep 5 #Give ZAP some time to process
    response = JSON.parse RestClient.get "http://localhost:8080/json/core/view/alerts", params: { zapapiformat: 'JSON', baseurl: "http://clients1.google.com", start: 1 } #Get the alerts ZAP found
    + response['alerts'].each {|x| p "#{x['alert']} risk level: #{x['risk']}"} #Extract the risks found
    + events = response['alerts']
    + low_count = events.select{|x| x['risk'] == 'Low'}.size #Count the Low Risks
    + expect(low_count).to equal(1) #Expect only one Low Risk
    driver.quit
    RestClient.get "http://localhost:8080/JSON/core/action/shutdown" #Close ZAP instance

    You can reference the script here.

    Use ruby add_assertions_to_check_zap_result.rb to run it and check the result.

  7. Now we can trigger this script in any CI pipeline using the command above.
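Because the script uses RSpec expectations, a failed assertion raises an exception and the Ruby process exits with a non-zero status, which is exactly what a CI server needs to mark the step as failed; when all expectations pass, the process exits with zero and the pipeline continues.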

Quick start for ServerSpec and Testinfra, also comparison of them

ServerSpec checks that servers are configured correctly by testing their actual state with RSpec. Testinfra is a kind of ServerSpec equivalent written in Python and based on Pytest.

Prerequisite:

  1. RVM, Ruby and rubygems are installed.

  2. Python and pip are installed.

  3. We expect to run Testinfra and ServerSpec against the server rather than our local machine, so we need a small tweak: connecting to the server with an ssh key.

    • Generate ssh key: ssh-keygen -t rsa
    • Copy ssh public key to server: ssh-copy-id -i ~/.ssh/id_rsa.pub username@server
    • Add the following lines to ~/.ssh/config.

      Host server
      User username

Get Started for ServerSpec:

  1. Install ServerSpec.

    gem install serverspec

  2. Initialize the ServerSpec folder with basic settings. Please note that the target server is set in this step.

    serverspec-init

  3. Write and run the ServerSpec script according to its API document.

    You can reference the script here; a minimal sample spec is also sketched after this list.

    Use rake spec in the test folder to run the tests and check the result.

  4. To run a specific test rather than the entire test suite:

    Use rake spec spec/host_server/sample_spec.rb in the test folder to run it and check the result.

  5. To generate an HTML file as the test result, we can add t.rspec_opts = '--format html --out reports/test_results.html' in the Rakefile when creating the Rake task, e.g.

    require 'rake'
    require 'rspec/core/rake_task'

    task :spec => 'spec:all'
    task :default => :spec

    namespace :spec do
      targets = []
      Dir.glob('./spec/*').each do |dir|
        next unless File.directory?(dir)
        target = File.basename(dir)
        target = "_#{target}" if target == "default"
        targets << target
      end

      task :all => targets
      task :default => :all

      targets.each do |target|
        original_target = target == "_default" ? target[1..-1] : target
        desc "Run serverspec tests to #{original_target}"
        RSpec::Core::RakeTask.new(target.to_sym) do |t|
          ENV['TARGET_HOST'] = original_target
    +     t.rspec_opts = '--format html --out reports/test_results.html'
        end
      end
    end

    It will generate a test result named “test_results.html” under the “reports” folder every time the tests run.
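For reference, here is a minimal sketch of what a spec like spec/host_server/sample_spec.rb might contain; the package, service, port and file under test (nginx, port 80) are hypothetical, not taken from the referenced script:

    # spec/host_server/sample_spec.rb: a hypothetical minimal ServerSpec spec.
    require 'spec_helper'

    describe package('nginx') do
      it { should be_installed }
    end

    describe service('nginx') do
      it { should be_enabled }
      it { should be_running }
    end

    describe port(80) do
      it { should be_listening }
    end

    describe file('/etc/nginx/nginx.conf') do
      it { should be_file }
    end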

Get Started for Testinfra:

  1. Install Testinfra and Paramiko.

    pip install testinfra
    pip install paramiko

  2. Write and run the Testinfra script according to its API document.

    You can reference the script here.

    Use testinfra testinfra_test.py to run it and check the result.

  3. Some useful arguments can make the test result clearer.

    Instead of using testinfra testinfra_test.py directly, we can add some arguments, such as -q, -s, --disable-warnings and --junit-xml.

    • The argument -q runs Testinfra in quiet mode, with less info exposed
    • The argument -s disables output capturing
    • The argument --disable-warnings disables warnings during Testinfra runs
    • The argument --junit-xml exports the Testinfra test result into an XML file

    After adding those arguments, the command should look like testinfra -q -s --disable-warnings testinfra_test.py --junit-xml=report.xml

  4. Now we can run Testinfra against the server using testinfra -q -s --disable-warnings --ssh-config=/Path/to/ssh/config --hosts=server testinfra_test.py --junit-xml=report.xml.

Comparison between ServerSpec and Testinfra:

Advantages of ServerSpec
  • More documentation and community support (compared to Testinfra);

  • The scripts, test results and reports are more readable (Testinfra is based on Pytest and can only export reports to XML, not JSON);

  • Although Testinfra supports “sysctl” and the command runs successfully on the server itself, it fails when invoked from the test script with the error “bash: sysctl: command not found”; the same may occur with other commands/resources (a potential risk for Testinfra);

  • ServerSpec supports more resources, and more attributes per resource, than Testinfra.

Advantages of Testinfra
  • Supports most of the common resources, such as Docker, File, Group, Service and Socket (equivalent to ServerSpec);

  • Shows the actual value when a permission check fails, so debugging is quicker.

Conclusion

If Python is your only choice of programming language, you have to use Testinfra; otherwise, I recommend ServerSpec for the benefits described above.

Quick Start for Docker

Installation

Download from https://www.docker.com/docker-mac

Register a Docker account

Register on https://www.docker.com/

Check Docker availability

Run the following commands in Terminal
docker -v
docker info

Pull a Docker image, e.g. “ruby”

docker pull ruby

Check the Ruby version in the Docker image “ruby”

docker run ruby ruby -v

Run a Docker image in the background, giving the container a name, e.g. “trial”

docker run -idt --name=trial ruby

List all the running Docker containers

docker ps

Attach to the running Docker container and execute commands in it

You can use docker attach trial
But it’s better to use docker exec -i -t trial sh, because exec starts a separate shell, so exiting it won’t stop the container
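When you are finished, docker stop trial stops the container and docker rm trial removes it.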

Reference
Docker — 从入门到实践