Tuesday, April 7, 2015

Create a pivot table from another pivot table



Excel doesn’t support creating a pivot table directly from another pivot table, but it is very easy to do using a named range.

Please refer to my previous blog about how to use named ranges. The only difference is that you might want to change the last parameter of OFFSET – the range width – to include more columns in the named range:
=OFFSET(javaChart!$A$4,0,0,COUNTA(javaChart!$A$4:$A$10000)-1,1)

Now, from the Insert tab, choose “PivotTable” and enter the named range as the data source:



That is all!

Create a scatter chart from a pivot table



Excel doesn’t allow you to create a scatter chart directly from a pivot table:

To overcome this, you can use the following approach:

  1.  Create a scatter chart using two columns of the pivot table as x, y series
  2.  Create two named ranges referring to the two columns
  3.  Replace the x, y series of the scatter chart with the named ranges

Create a scatter chart using two columns of the pivot table as x, y series


  1. Menu tab: Insert, choose “scatter” chart
  2. Right-click on the chart and select “Select Data”
  3. Edit the series



You do not need to be precise about the X, Y series; just choose a few values from the two columns of the pivot table. Later, you will replace these two series with named ranges.

Create two named ranges referring to the two columns

  1. Menu tab: Formulas, choose “Define Name”
  2. Create a new named range referring to the first column of the pivot table
Name: javaPivotName
Formula: =OFFSET(javaChart!$A$4,0,0,COUNTA(javaChart!$A$4:$A$10000)-1,1)

  3. Create a second named range for the second column, using the first range as the reference – for example (one common pattern, assuming the data sits in the adjacent column), =OFFSET(javaPivotName,0,1) shifts the first range one column to the right

Replace the x, y series of the scatter chart with the named ranges

Click on a dot in the scatter chart, and replace the x, y series with the two named ranges:

  1. Replace “$A$4:$A$12” with the first named range
  2. Replace “$B$4:$B$16” with the second named range


Automate the process in a macro

If you try to automate this in a macro, you will encounter something tricky. To create such a macro, you use the macro recorder to capture the creation of a named range, which generates code like this:

  ActiveWorkbook.Names.Add Name:="javaPivotName", RefersToR1C1:= _
        "=OFFSET(javaChart!R4C1,0,0,COUNTA(javaChart!R4C1:R10000C1)-1,1)"

Notice that it uses R1C1-style cell references.

Being a good developer, you of course do not want to hard-code the absolute row and column numbers, so you change the code to:

ActiveWorkbook.Names.Add Name:=xRangeName, RefersToR1C1:= _
        "=OFFSET($" + xColumn + "$" + xRow + ",0,0,COUNTA($" + xColumn + "$" + xRow + ":$" + xColumn + "$10000)-1,1)"

Then you try to replace the x, y series in the scatter chart with the named ranges, and here is where you hit the tricky part: Excel thinks your named range is invalid. If you check the named range, its definition is:
=OFFSET(javaChart!'A4',0,0,COUNTA(javaChart!'A4':'A10000')-1,1)

Notice that Excel has somehow added single quotes around the cell addresses.

This is because two cell-referencing styles have been mixed: the formula string uses A1-style addresses, but it is assigned to RefersToR1C1, which expects R1C1-style. You should stick to one style. So the solution is to change

ActiveWorkbook.Names.Add Name:=xRangeName, RefersToR1C1:= _
        "=OFFSET($" + xColumn + "$" + xRow + ",0,0,COUNTA($" + xColumn + "$" + xRow + ":$" + xColumn + "$10000)-1,1)"

To:

ActiveWorkbook.Names.Add Name:=xRangeName, RefersTo:= _
        "=OFFSET($" + xColumn + "$" + xRow + ",0,0,COUNTA($" + xColumn + "$" + xRow + ":$" + xColumn + "$10000)-1,1)"

Sunday, March 15, 2015

To automate or not – A cost perspective



Last week, in a meeting, a QA manager proposed that we needed to make up for some automation test cases that were dropped in the last release due to feature changes and the difficulty of maintaining them. My boss asked point-blank, “Why? What is the point of automation?” At first, I thought the answer was obvious; we all know the benefits of automation:
  •  Catch regressions
  •  Provide fast feedback; the later a bug is discovered, the more it costs to fix
But he is my boss, so I didn’t want to toss these easy answers back at him. Back at my desk, I couldn’t get rid of this seemingly stupid question. When I was a DEV manager, I was guilty of pointing fingers at QA for letting stupid bugs slip out; now that I have a chance to look at both sides, the more I think about the question, the more profound it becomes.

Think about how a tester typically writes test cases:
  1.  First, he tests a feature manually
  2.  He discovers bugs, prompts DEV to fix them, and redoes the manual test; then he discovers other bugs, and the process repeats
  3.  Finally, when he thinks he has gotten rid of most bugs, he starts to write automation tests
  4.  Writing automation tests is just like writing DEV code; he might need several write/debug/test rounds to get the automation tests ready
  5.  Because the tester is busy writing automation tests, he cannot invest time in manually testing other features

If we assume the time cost to run a test manually is 1, writing an automation test may cost 10 or even more. And running the automation test at this point doesn’t bring any value: manual testing has already caught most of the bugs, so running the automation test won’t find new ones.

As an extreme case, if all automation tests are written after the system under test (SUT) has stabilized, running all the automation tests won’t find any new bugs, and all the investment in automation for this release brings no benefit at all.

So when and how does the investment in automating test cases redeem itself?

 

Cost of repeatedly running tests

 

Running a test manually once may take little time, but if the test has to be run many times, the time cost adds up. Writing an automation test may take 10 times as long or more, but running it 100 times takes no additional human time. So the cost is amortized if the automated test has to be run many times: with the numbers above, automation breaks even after roughly ten runs, and every run after that is nearly free.

What tests need to be run many times?

  • Smoke tests. These need to be run after each build to make sure no serious bugs have been introduced by code changes.
  • Compatibility tests, e.g. across different operating systems, different databases, etc.


Even in the extreme case where all automation tests are written after the SUT has stabilized, the investment in automation is redeemed in the next release, where the tests act as regression tests ensuring that changes made in the current release do not unintentionally change old behaviors.

Maintenance cost

 

Writing an automation test case may already take 10 times longer than running the test manually; maintaining it may take even more time. In fact, many automation test cases are dropped, or reduced to near-uselessness, because of their high maintenance cost. The life cycle of an automation test case looks like this:



If a test becomes useless or dead before it has been run enough times to justify its creation and maintenance cost, the investment in automating it is never recouped. If a test case breaks for bad reasons, the cost of fixing it again and again diminishes its value as well.

Let us consider why an automation test case breaks.

 

Bad break

 

If a test case breaks because of elements unrelated to its intention, it is a bad break. GUI automation test cases are often more fragile and suffer more bad breaks.

For example, take a user story: a user has $50 in his account; he types in $100, clicks the “withdraw” button, and is notified on the screen, “Sorry, you can’t withdraw $100”. The intention of the test case is to verify that users can’t overdraw their accounts. If the “withdraw” button is relabeled “WITHDRAW”, that doesn’t change the intention of the test case, and the test shouldn’t fail. If the on-screen prompt is changed to “Sorry, you are not allowed to withdraw $100”, that usually shouldn’t break the test case either – the SUT should return an ERROR code, which is more stable, and the test case should look for the ERROR code instead of the error message, as in the sketch below.
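
To make this concrete, here is a minimal sketch in Java (the BankAccount, WithdrawResult, and ErrorCode types are hypothetical, purely for illustration) of a test that asserts on a stable error code rather than on the text painted on the screen:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OverdrawTest {

    @Test
    public void cannotOverdrawAccount() {
        // Hypothetical API: an account holding $50
        BankAccount account = new BankAccount(50);

        // Try to withdraw more than the balance
        WithdrawResult result = account.withdraw(100);

        // Assert on the stable error code, not on the volatile UI message
        assertEquals(ErrorCode.OVERDRAWN, result.getErrorCode());
    }
}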

There is another kind of bad break that is related to data. Ideally, every test case should be isolated and start from a clean slate. For example: a user deposits $100 into an account that originally holds $100, then withdraws $50; he should have $150 left. If, before he withdraws, another test case runs in which the same user withdraws $50, the first test case will fail because the balance will be $100. This is a simple example and can be dealt with in many ways – data shouldn’t be persisted to a real database, each test case can create a unique account (see the sketch below), or both test cases can run in their own transactions, etc. But in reality, the reason to have GUI test cases in the first place is that it is hard (or even impossible) to do modularized tests, and the test cases have to run against the real database.
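
Here is a minimal sketch of the “unique account per test” approach (again with a hypothetical BankAccount API): each test creates its own account, so no other test can interfere with its balance:

import static org.junit.Assert.assertEquals;

import java.util.UUID;

import org.junit.Test;

public class DepositThenWithdrawTest {

    @Test
    public void depositThenWithdraw() {
        // A unique account per test run, so concurrent or earlier tests
        // cannot have touched this balance
        BankAccount account = new BankAccount("user-" + UUID.randomUUID(), 100);

        account.deposit(100);
        account.withdraw(50);

        // $100 original + $100 deposit - $50 withdrawal
        assertEquals(150, account.getBalance());
    }
}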

Too many bad breaks cause teams to lose faith in the automated test cases and tempt them to take shortcuts, for example by commenting out the parts that often fail, or by commenting out assertions. Thus, although the test cases run green, they become simpler and simpler until they are useless. Worse still, they give the team a false sense of safety, because the SUT appears to have passed the automation tests.

 

Break caused by feature change

 

If the feature under test changes, the test case of course breaks. For example, suppose the bank now allows overdrawing up to a limit of $100; withdrawing $100 from a $50 account can now succeed. The test case that previously expected an ERROR code now breaks.

In reality, feature changes might be very big, and it might be more cost-effective to rewrite test cases than to tweak the old ones.

This happened to my team. The previous release changed some features greatly, and the old test cases were so hard to maintain that we simply dropped them.

 

Break because of bugs

 

Finally, this is the reason we hope a test case breaks! The more such breaks, the more valuable the test cases are.

We especially expect such breaks when we are changing common code that is used in many places. In a tightly coupled system (in other words, in real-world systems where architecture inevitably decays), a seemingly small change may ripple through the whole system, and nobody can tell exactly what the impact is. In this situation, automation test cases act as:

  •  A safety net that catches regressions
  •  Documentation of the impact

Increase cost-effectiveness

 

From the above analysis, it is clear that to increase cost-effectiveness, we need to:

  • Reduce cost by making automation easier to write and maintain
  • Add value by automating in critical areas; test cases against these areas will be run many times and guard against regressions

Friday, February 27, 2015

Service Discovery with Zookeeper



The design behind Zookeeper is intriguing. If you do not think so, ask yourself: if you were to design a library that does inter-process locking, what would you do? You would probably expose features such as lock and unlock, right? Not Zookeeper. Instead, it provides file-system-like APIs that let users build their own inter-process coordination.

This gives Zookeeper a lot of flexibility.
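
As a taste of those primitives, here is a minimal sketch (assuming a Zookeeper server on localhost:2181) that uses the raw Zookeeper client to create an ephemeral znode and stat it – no lock API in sight, just the file-system-style operations that coordination recipes are built from:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeBasics {

    public static void main(String[] args) throws Exception {
        // Connect to a local server; a no-op watcher is enough for this sketch
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, event -> {});

        // An ephemeral znode disappears when this session ends - the
        // building block for locks and service registration alike
        zk.create("/demo-node", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // "exists" is just a file-system-style stat call
        System.out.println(zk.exists("/demo-node", false) != null);

        zk.close();
    }
}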

But using the Zookeeper APIs directly is difficult (e.g. you have to manage connections, handle errors, etc.), and that is where Curator comes in: it makes working with Zookeeper much easier.

Zookeeper and Curator provide recipes for some common inter-process coordination tasks. 

I use Zookeeper in my container farm for service registry and service discovery:


Containers that run ElasticSearch register themselves with Zookeeper. Containers that run my app also run Logstash, which will not start until it finds that the ElasticSearch service has been registered. Logstash analyzes logs and sends them to ElasticSearch, and upon fatal errors it emails the admin. My apps also register themselves with Zookeeper.
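
The “wait until the service is registered” part can be done with the same discovery recipe shown later in this post. Here is a minimal sketch (the server address and the service name “elasticsearch” under the mycontainerfarm base path are assumptions for illustration) that polls until at least one instance shows up:

import java.util.concurrent.TimeUnit;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryNTimes;
import org.apache.curator.x.discovery.ServiceDiscovery;
import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;

public class WaitForService {

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "192.168.33.16:2181", new RetryNTimes(5, 1000));
        client.start();

        ServiceDiscovery<Void> discovery = ServiceDiscoveryBuilder.<Void> builder(Void.class)
                .basePath("mycontainerfarm").client(client).build();
        discovery.start();

        // Poll until at least one "elasticsearch" instance is registered
        while (discovery.queryForInstances("elasticsearch").isEmpty()) {
            TimeUnit.SECONDS.sleep(5);
        }
        System.out.println("elasticsearch is registered, safe to start Logstash");

        discovery.close();
        client.close();
    }
}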

 

Install zookeeper

To try the following code, first download zookeeper from http://www.apache.org/dyn/closer.cgi/zookeeper/ and unzip it. Copy conf/zoo_sample.cfg to conf/zoo.cfg, edit zoo.cfg, and change dataDir to a folder that exists.

Start zookeeper server:
./zkServer.sh start

Now start the zookeeper client; you can use it to view registered services:
./zkCli.sh

 

Service registering recipe

The following is the service registering application (imports shown for completeness: arguments are parsed with JCommander, and Strings comes from Guava):

import com.beust.jcommander.JCommander;
import com.beust.jcommander.Parameter;
import com.google.common.base.Strings;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryNTimes;
import org.apache.curator.x.discovery.ServiceDiscovery;
import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
import org.apache.curator.x.discovery.ServiceInstance;

public class ServiceRegister {

       @Parameter(names = { "-service" }, description = "service ids")
       private String serviceId;

       @Parameter(names = { "-h" }, description = "help")
       private boolean help;

       @Parameter(names = { "-zkServer" }, description = "the ip address of the zookeeper server")
       private String zkServer;

       @Parameter(names = { "-zkPort" }, description = "the port of the zookeeper server")
       private String zkPort;

       private static volatile boolean keepRunning = true;

       private ServiceDiscovery<Void> disoveryBuilder;

       public static void main(String[] args) throws Exception {
              final ServiceRegister register = new ServiceRegister();
              new JCommander(register, args);

              register.registerService();

              Runtime.getRuntime().addShutdownHook(new Thread() {
                      public void run() {
                             try {
                                    register.unregisterService();
                             } catch (Exception e) {
                                    e.printStackTrace();
                             }
                             keepRunning = false;

                      }
              });

              while (keepRunning) {
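                       // Keep the process alive; the shutdown hook above handles cleanup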
                      Thread.sleep(50000000);
              }
       }

       private void registerService() throws Exception {

               CuratorFramework curatorFramework = CuratorFrameworkFactory.newClient(zkServer + ":" + zkPort, new RetryNTimes(5, 1000));
               curatorFramework.start();

               disoveryBuilder = ServiceDiscoveryBuilder.<Void> builder(Void.class)
                              .basePath("mycontainerfarm").client(curatorFramework).build();
               // Start the discovery service before registering instances
               disoveryBuilder.start();

              String[] ids = serviceId.split(",");
              for (String id : ids) {
                      if (Strings.isNullOrEmpty(id)) {
                             continue;
                      }

                      ServiceInstance<Void> serviceInstance = ServiceInstance.<Void> builder().name("app").id(id).build();

                      disoveryBuilder.registerService(serviceInstance);
              }

       }

       private void unregisterService() throws Exception {
              String[] ids = serviceId.split(",");
              for (String id : ids) {
                      if (Strings.isNullOrEmpty(id)) {
                             continue;
                      }

                       ServiceInstance<Void> serviceInstance = ServiceInstance.<Void> builder().name("app").id(id).build();

                      disoveryBuilder.unregisterService(serviceInstance);
              }
       }
}
Run this application with arguments:
ServiceRegister -zkServer 192.168.33.16 -zkPort 2181 -service myapp1

Using zkCli, you will see that myapp1 is registered at /mycontainerfarm/app/myapp1:

Unregistering is not strictly necessary: the node /mycontainerfarm/app/myapp1 is an ephemeral one, so when ServiceRegister exits, the zookeeper server automatically removes it after a period of time.

ServiceRegister unregisters explicitly in its shutdown hook, so that when it exits, the node /mycontainerfarm/app/myapp1 disappears immediately.

 

Inter-process locking recipe

There is one problem with this recipe: the argument -service is the service id, which should be unique. Curator’s logic is that if it finds the same node already exists, it deletes it and recreates it.

You can verify this by checking the ctime and mtime shown by get /mycontainerfarm/app/myapp1 in zkCli. This doesn’t fit my requirement: I only want to register a service if it doesn’t already exist.

We can use another recipe: inter-process locking.

// needs org.apache.curator.framework.recipes.locks.InterProcessMutex and java.util.concurrent.TimeUnit
for (String id : ids) {
        if (Strings.isNullOrEmpty(id)) {
                continue;
        }

        // One lock path per service id; the lock is backed by an ephemeral znode
        InterProcessMutex lock = new InterProcessMutex(curatorFramework,
                       "/mycontainerfarm/myapp" + id);
        boolean acquired = lock.acquire(5, TimeUnit.SECONDS);
        if (acquired) {
                ServiceInstance<Void> serviceInstance = ServiceInstance.<Void> builder()
                               .name("app").id(id).build();

                disoveryBuilder.registerService(serviceInstance);
                // The lock is deliberately not released: while this process lives,
                // another process cannot acquire it and so skips re-registering
        } else {
                System.out.println("fail to register " + id);
        }
}
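
Note that the mutex itself is backed by ephemeral znodes, so even if a process dies without ever releasing the lock, the zookeeper server cleans it up when the session expires.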

 

Service discovery recipe

import java.util.List;

import com.beust.jcommander.JCommander;
import com.beust.jcommander.Parameter;
import com.google.common.collect.Lists;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryNTimes;
import org.apache.curator.x.discovery.ServiceDiscovery;
import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
import org.apache.curator.x.discovery.ServiceInstance;
import org.apache.curator.x.discovery.ServiceProvider;

public class ServiceDiscover {
       @Parameter(names = { "-zkServer" }, description = "the ip address of the zookeeper server")
       private String zkServer;

       @Parameter(names = { "-zkPort" }, description = "the port of the zookeeper server")
       private String zkPort;

       public static void main(String[] args) throws Exception {

              final ServiceDiscover discover = new ServiceDiscover();
              new JCommander(discover, args);
              discover.discover();

       }

       private void discover() throws Exception {
               CuratorFramework client = CuratorFrameworkFactory.newClient(zkServer + ":" + zkPort, new RetryNTimes(5, 1000));
              client.start();

              final ServiceDiscovery<Void> serviceDiscovery = ServiceDiscoveryBuilder
                  .<Void> builder(Void.class).basePath("mycontainerfarm").client(client).build();
              serviceDiscovery.start();

              ServiceProvider<Void> serviceProvider = serviceDiscovery.serviceProviderBuilder()
                      .serviceName("app").build();
              serviceProvider.start();

              List<ServiceInstance<Void>> allInstances = Lists.newArrayList(serviceProvider
                             .getAllInstances());

              for (ServiceInstance<Void> instance : allInstances) {
                      System.out.println(instance.getId());
              }

       }

}
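
Run it the same way as ServiceRegister, for example:
ServiceDiscover -zkServer 192.168.33.16 -zkPort 2181

It will print the id of every registered “app” instance.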