Android Automation - NativeDriver

Library Configuration Tip:

  1. To run the Robotium tests, select all entries under “Properties -> Java Build Path -> Order and Export”;
  2. Move robotium to the top, and move Android and Android Dependencies just below robotium (but still above any project entries).


Testing Tips:

  1. Create the tests as JUnit tests;
  2. Under the application folder, run $Android-SDK-Prefix/platform-tools/adb shell am instrument com.calculator/; then run $Android-SDK-Prefix/platform-tools/adb forward tcp:54129 tcp:54129;
  3. The connection and the port forwarding are easily broken; check both whenever tests fail, especially after a test failure or a code change;
  4. If an element, especially a text field, is not focused, you cannot type into it; call click() on it before sendKeys();
  5. After typing into a text field, navigate back to dismiss the soft keyboard, using driver.navigate().back();
  6. There is no wait() API; just use Thread.sleep();
  7. The take-screenshot method accepts only a full (absolute) path.
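The lack of a wait() API (tip 6) can be softened with more than a bare Thread.sleep(): a small polling helper retries a condition until a timeout, so the test sleeps only as long as needed. This is a plain-Java sketch with no NativeDriver dependency; the `Waiter` class and `waitFor` name are my own, not part of any framework.

```java
import java.util.function.BooleanSupplier;

public class Waiter {
    // Polls `condition` every `intervalMs` milliseconds until it returns
    // true or `timeoutMs` milliseconds have elapsed; returns the last result.
    public static boolean waitFor(BooleanSupplier condition,
                                  long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs); // the only sleep the test needs
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Example condition: becomes true roughly 200 ms after start.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start >= 200,
                             2000, 50);
        System.out.println(ok);
    }
}
```

In a NativeDriver test, the condition would typically check something like whether findElements(...) has returned a non-empty list.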


Some materials for Android - NativeDriver:

  1. NativeDriver
  2. NativeDriver Wiki
  3. NativeDriver Source
  4. NativeDriver Google Groups
  5. TestNG + NativeDriver for Android UI automation
  6. How to Screen Capture On Error
  7. NativeDriver screen capture on Windows


iOS Automation - KIF (Keep It Functional)

Please download and unzip the project, then select the “HelloWorld copy” target to run the KIF tests.


  1. I have modified the KIF framework by adding “[(UITextField *) view setText:nil];” in KIFTestStep.m to clear the text field before input;
  2. By default, “[super initializeScenarios]” in EXTestController.m initializes all scenarios in ascending order of scenario name; to change their order, change the corresponding lines in “initializeScenarios” in EXTestController.m.
  3. In KIF tests, to read the value of a text field, we must first set its “accessibilityValue” in the application code; only then can the test code access the value. It is better to package this into a single class we can reuse later.

References for KIF:

  1. KIF uses undocumented Apple APIs. This is true of most iOS testing frameworks and is safe for testing purposes, but it is important that KIF not make it into production code. This is done by duplicating a second, KIF-enabled target of the app under test. The second target gives you an easy way to begin testing (just run it) and also helps ensure that no testing code ever makes it into your App Store submission and gets your app rejected.
  2. All of the KIF tests are written in Objective-C, so performance is good.
  3. Some detailed steps are not covered in the official KIF documents; compare the docs with the code provided on GitHub.

Advantages of KIF compared to UIAutomation (as summarized briefly by the KIF group):

KIF Pros

  1. All objective-C. No need to write translations from JavaScript.
  2. Can easily hook into your code base for obscure things, like “fake a credit card swipe.”
  3. Easily runs on the command line for CI. Current shipping version of UIAutomation does not.
  4. No external dependencies. UIAutomation currently requires Instruments.
  5. Easy to integrate into your Xcode workflow.
  6. Tests run quickly. Runs in the app rather than being translated from Instruments/JS.
  7. It’s open source. If there’s something you want to add, you can. If something’s broken, it can be fixed quickly.

UIAutomation Pros

  1. Runs outside of the process, so it could potentially simulate things like switching apps or hitting the home button. KIF could possibly do some of this using private API, but in general you need to mock these sorts of interactions.
  2. JS is less verbose and may appeal to some more than ObjC.
  3. Apple has full access to the frameworks and can potentially do some awesome things in the future.
  4. We get new features for free over time, and old ones will always be supported. KIF uses a lot of undocumented API, so it’s possible that future iOS releases would require it to be updated.

Some materials for iOS - KIF framework:

  1. KIF
  2. KIF Google Groups
  3. iOS Integration Testing
  4. Enabling Accessibility Programmatically on iOS Devices

iOS Automation - UIAutomation

This framework is not recommended!

Some materials for iOS - UIAutomation framework:

  1. Working with UIAutomation
  2. UI Automation
  3. UI Automation JavaScript Reference for iOS
  4. How Do I Perform UI Automation Testing in iOS 4
  5. iOS - UI Automation get textfield by accessibility label?
  6. iOS Automated Tests with UIAutomation
  7. Can’t get value of UIAStaticText?


Reasons why UIAutomation is not recommended:

  1. Each line of JavaScript in UIAutomation triggers a request to the iOS simulator, which makes its performance awful;
  2. The framework is unstable: with the attached script, the test results sometimes differ even when the same environment and script are used, and sometimes the simulator or Instruments hangs and quits unexpectedly;
  3. Its testing support is poor: an element can be retrieved by its id, but only occasionally by its name/label, and some issues remain unresolved, such as the value of a label item not being readable;
  4. The JavaScript is hard to read, and there is no capability to integrate with tools like Cucumber;
  5. There is no capability to integrate with CI tools;
  6. It runs only on the simulator, not on real devices, which means tests cannot run in parallel.


Writing Test Cases with Mind Maps

In previous projects, the Acceptance Criteria (AC) were described in the Jira description. In the current project, besides describing the AC in Jira, we also write them in a mind map, so that they form test cases annotated with test results and test-code coverage. This has worked well so far.


Test Cases in the Mind Map

Although the AC in the mind map are not written strictly in the standard Given/When/Then format, the context of each node actually tells us all of that information.

In my opinion, writing test cases this way has the following advantages:

  1. It is hierarchical and clear at a glance;
  2. The relationships between AC are visible;
  3. You can see which AC have test implementations;
  4. You can see the test results;
  5. Anyone, in any role, can read the mind map and understand the status of the corresponding story.

For writing and organizing the mind maps, we currently split the files by relatively independent features, named Page-Feature-StoryID, which makes it easy to find the mind map we need. How to organize them once there are many more features and pages (more pages mean more files, and more features inevitably mean features related across different pages) is still under consideration; we will adapt gradually.



Our workflow with the mind map:

  1. After the BA settles the requirements, he or she writes the first draft of the mind map;
  2. QA and BA review it together and arrive at the final AC;
  3. At the story kick-off, the team walks through the mind map;
  4. During development, the developer appends the names of the implemented unit-test methods to the corresponding AC, together with a unit-test-coverage icon;
  5. QA can then see which AC are covered by tests, and focuses exploratory testing on the uncovered ones; once testing is done, a check or a cross is added to the corresponding AC to mark the result.


Remaining issues:

  1. It is one more description of the story. It started as a supplement to the story, so whether the mind map gets updated in time as the story changes has become something to watch;
  2. The mind map is documentation, but not executable documentation.



1. User input:

  • Physical keyboard or virtual keyboard?
  • Touch/tap input or handwriting input? For a product aimed at elderly users, handwriting input is the better choice, because it is easier for them to adapt to;
  • How does the user position the cursor at a particular place in an input string? With thick fingers, or on a small screen, it is hard to hit an exact position.

2. Software state:

  • If a phone call comes in while the program is running, will it crash or lock up the program?
  • Can the program restore its previous state after the interruption?
  • For example, if a call comes in while a song is playing, does the song pause automatically, and does it resume after the call ends?

3. Scenario-based considerations:

  • Will the user run other programs at the same time as this one?
  • When the program produces a result, does the user need other programs to act on it?
  • For example, after calculating their finances, the user may need to email the bank to confirm an incorrect bill.


4. Platform maturity:

  • Android is still in a period of rapid development and change; many features and even APIs are not yet complete or stable, and the interfaces of some tools and libraries are at times inconsistent with the SDK APIs, for example Robotium.


5. Multitasking:

  • Are multiple program processes allowed to run at the same time?
  • If so, can resources (hardware and user data) deadlock, and is there contention for them?
  • Is this program allowed to run alongside other applications?
  • How is data security guaranteed?


6. Compatibility:

  • Compatibility with different sizes and resolutions? This is not only a matter of appearance: phones and tablets are used differently. Phones are mostly used to get at important content quickly, so the key points must stand out clearly, while tablets are mostly used to read detailed content and reports;
  • Support for different system versions? In particular, how to cope with Android's strategy of upgrading the version every few months;
  • Can the program be upgraded?
  • How is backward compatibility maintained?


7. Inter-application calls:

  • Unlike traditional software and systems, Android allows a program to call the interfaces that other programs expose, or rather provide. How should these calls be handled, along with the stability and security problems they bring?


8. Memory limit:

  • The Android system caps the memory allocated to each application at 24 MB.
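The actual heap cap varies by device, so it is worth checking rather than assuming. On Android the documented query is ActivityManager.getMemoryClass(); the plain-Java sketch below (class name `HeapInfo` is my own) only demonstrates the standard Runtime calls, which on a device reflect the same per-app limit:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() is the most the VM will ever try to use for the heap;
        // on a device it reflects the per-app cap (e.g. 24 MB).
        long maxMb = rt.maxMemory() / (1024 * 1024);
        long totalMb = rt.totalMemory() / (1024 * 1024);
        System.out.println("Heap limit: " + maxMb
                + " MB, currently allocated: " + totalMb + " MB");
    }
}
```

Logging these numbers at startup makes it easy to tell whether an OutOfMemoryError came from the platform cap or from a leak in the application.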





  1. Practices that give the team positive feedback, such as applauding when QA finishes testing a story (making the team care more about overall project progress and strengthening its sense of responsibility), possibly including some team-building activities;
  2. Daily code review, ideally with QA participating, so that the whole team has a unified, overall understanding at both the business and code levels;
  3. Before development of a story formally starts, write not only the test scenarios and cases but ideally also the scenarios to be automated; this deepens understanding of both regression testing and the story itself;
  4. Rotating the release-manager and iteration-manager roles lets more team members experience every aspect of the agile process, which helps strengthen ownership and understanding of the process;
  5. Learn more about the client's organizational structure and how decisions affect them, and analyze who is the most suitable person to solve a problem in each situation; also communicate with the client more, strengthen personal influence, and build the client relationship in multiple ways;
  6. Understand and analyze the client's business model and needs better, so that we offer suggestions not only at the level of individual stories but also analysis at a higher level, delivering more business value;
  7. Try building the automation environment with Ruby and Cucumber, because approaches like cuke4duke with Java impose the limitations of an intermediate plugin on the test framework, and require a separate test environment and database;
  8. Give the client periodic feedback more frequently (at least once a month), and hold more frequent one-on-one conversations and feedback sessions within the team;
  9. Allocate time sensibly, limit the number of activities joined at once, and stay energetic;
  10. Strengthen communication with the client and with other teams, and make a better communication plan (regular, with a clearly designated contact person);
  11. Training for new team members, especially QAs, can be layered, using a skills matrix to make a fairly detailed plan and finding ways to keep them motivated; suggesting a project switch every six months strengthens ties between project teams and the office; project introductions and training for newcomers should also be systematized;
  12. Strengthen the team's quality awareness: developers help with testing, developers and testers swap duties, team members use and test the system themselves, and team members share the duty of demoing to the client;
  13. Increase the visibility of test coverage (manual and automated) and analyze it;
  14. Analyze defects and other test data by module, by feature, and by actual day;
  15. Summarize the feedback after each demo to the client, to guide subsequent analysis and development (both iteration demos and story-card demos);
  16. Don't rush to communicate; let things settle until you see the problem clearly and can keep your own emotions out of it, then communicate; use immediate channels such as the phone more often, and use intuitive forms such as pictures more often;
  17. Use different forms to make story-card, team, and iteration progress more visible;
  18. When something urgent comes up, notify the team so it can be handled accordingly;
  19. Try not to interrupt other team members' work;
  20. Sit where you can reach the different development pairs, to improve communication among everyone;
  21. Adjust story sizes sensibly (including technical stories) so that a story is finished every day, strengthening the team's sense of achievement.