Friday, November 8, 2013

A small cool macro that makes Mind Maps and Spreadsheets better friends

If, like me, you belong to the Mind Map lovers' group, there's a good chance this tool will interest you.
I like Mind Maps because they are easy to create and evolve. They represent data in a way that makes sense to human beings.
When you want to add a leaf to any branch of the data structure, you don't need to mess around; it is just a natural development of the idea's representation.

On the other hand, Spreadsheets have their own advantages. They are able to perform calculations on the data, and sort and filter it.

I like to combine Mind Maps and spreadsheets in my work: I summarize ideas in a Mind Map and then move them to an Excel™ spreadsheet when I need calculations and filtering.
I use Xmind for creating Mind Maps. Moving the data from the Xmind application to an Excel sheet is very easy: copy-paste the central subject into the sheet. 
However, the data format in the target sheet is not very useful for my goal – each hierarchy level is placed in a new column, as you can see in picture #1.
I would be happier if all the data were in the same column, indented by hierarchy level, as you can see in picture #2.
It would be even cooler if we could use Excel's "grouping" feature, so we could see the exact hierarchical level of the data, as you can see in picture #3.
Pic #1
I will not keep secrets from you: I wrote Excel VBA macros that provide the wish list above. Feel free to use them; just copy them into your VBA editor.

Note: while this tool is great for porting the Mind Map data into Excel, once you've used it, the data won't be easily exported back to a Mind Map. If you find it necessary, you can create a VBA macro that helps with that; a rough sketch of one appears after the code below.


Pic #3
Pic #2
I created three macros: one moves the data into a single column and indents it according to the hierarchy, another groups the data according to the indentation, and the third calls both.
Before you run the macro, make sure the cells that contain the data are selected, as you can see in the following video:
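If the video isn't available, here is a minimal, hypothetical example of the same preparation done from the VBA side; the range address is only an illustration of where the pasted mind map might land:

' Assume the pasted mind map occupies A1:E30; select it, then run
' the combined macro defined below
Range("A1:E30").Select
moveIndentGroup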

If you have any questions, tweet me: @testermindset
Enjoy!
The VBA code:

Sub moveIndentGroup()
' This macro calls the two macros below, performing all the actions in one command

    Dim LastRow As Long
    ' The last worksheet row of the selection (works even if the
    ' selection does not start at row 1)
    LastRow = Selection.Rows(Selection.Rows.Count).Row
    Call moveAndIndent
    Call GroupIt(LastRow)
End Sub

Sub moveAndIndent()
' Moves every non-empty cell of the selection into column A and indents
' it by its original column: column B becomes indent level 1, column C
' level 2, and so on

    Dim rCell As Range
    Dim rRng As Range
    Set rRng = Selection
    For Each rCell In rRng.Cells
        If Not IsEmpty(rCell.Value) Then
            Cells(rCell.Row, 1).Value = rCell.Value
            Cells(rCell.Row, 1).IndentLevel = rCell.Column - 1
            If rCell.Column > 1 Then rCell.Value = ""
        End If
        Cells(rCell.Row, 1).HorizontalAlignment = xlLeft
    Next rCell
End Sub

Sub GroupIt(Optional LastRow)
' Groups the rows in column A by indent level, one pass per level
' (up to 5 hierarchy levels)

    Dim i As Long, j As Long
    Dim FirstCell As Long, LastCell As Long
    If IsMissing(LastRow) Then LastRow = Selection.Rows(Selection.Rows.Count).Row
    For j = 1 To 5
        For i = 1 To LastRow
            If Cells(i, 1).IndentLevel = j Then
                FirstCell = i
                LastCell = i
                ' Extend the group while the next row is at the same or
                ' a deeper indent level
                While (Cells(FirstCell, 1).IndentLevel <= Cells(LastCell + 1, 1).IndentLevel) And LastCell <= LastRow
                    LastCell = LastCell + 1
                Wend
                Range(Cells(FirstCell, 1), Cells(LastCell, 1)).Rows.Group
                i = LastCell ' Continue scanning right after this group
            End If
        Next i
    Next j
End Sub
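And regarding the note above about exporting back: here is a minimal, untested sketch of such a reverse macro, assuming the indented data sits in column A of the active sheet. It removes the grouping and spreads each value back into one column per hierarchy level; I can't promise Xmind will accept the pasted result as-is.

Sub indentToColumns()
' Hypothetical reverse of moveAndIndent: spreads the single indented
' column back into one column per hierarchy level

    Dim r As Long, lastUsedRow As Long, level As Long
    lastUsedRow = Cells(Rows.Count, 1).End(xlUp).Row
    On Error Resume Next
    Cells.ClearOutline ' remove any grouping created by GroupIt
    On Error GoTo 0
    For r = 1 To lastUsedRow
        level = Cells(r, 1).IndentLevel
        If level > 0 Then
            ' Move the value to the column that matches its level
            Cells(r, level + 1).Value = Cells(r, 1).Value
            Cells(r, 1).ClearContents
            Cells(r, 1).IndentLevel = 0
        End If
    Next r
End Sub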

Sunday, November 3, 2013

Dealing with Stress take 2

If you are reading this post, there is a chance that you have read my blog post about Stress.
That post was transformed twice: first, I turned it into an article in Tea Time with Testers magazine, then it became a presentation at the QA&Test 2013 conference in Bilbao, Spain.

The conference was awesome. The hospitality was great. I had an opportunity to meet and share ideas
with many cool testers from all over the world. I was also able to experience presenting at an international conference.

Besides the format change – from a blog post to an article and then into a presentation – the ideas themselves evolved and developed, as I got a lot of feedback while working on the material.

The main idea behind the work is the need to connect our Stress tests to the users' needs. To do that, I suggest categorizing our Stress tests and failures into three main categories: Multiple experiments, Stability over time, and Load.
While working on the presentation I added a few aspects that are connected to failure classification and Stress test planning:
- Assessing risk when selecting risky flows for multiple experiments.
- Taking into account the impact of the product on the system's stability.
- Finding good oracles beyond the official requirements when defining the load and stability targets.
- Performing load tests of a few types:
  - The largest amount of data or actions that is meaningful for the users.
  - The full capacity of the product – in order to spot degradation in capacity before it has an impact on the users.
- Using a good logging mechanism to gather data on all the experiments that your stress test performs.
- Monitoring the system resources in order to find stability issues quickly.
Since I was scheduled to present on the last day of the conference, I had some time to get inspiration from a few people I met during the first two days. The night before the presentation, I changed the summary slide from a list into a mind map that summarizes my take on the subject.

I am publishing the mind map and would like to ask you to review it and contribute to my initial work on it. I promise to give you credit if you provide meaningful input.


Thursday, May 30, 2013

Notes from the 1st conference at Ben-Gurion University of the Negev: Software Quality and Testing - Academia and Industry Get Together



I participated in the conference and would like to share my impressions and notes.

I really liked the idea of having such a conference, to get some exposure to what’s going on in the academic world regarding SW testing. This, together with promising talks from industry practitioners, was a good reason to head south and participate.

The hosting was great and the organizers deserve a good word for their efforts. The conference was held in the university's "Senath" auditorium, which is a very pleasant and cozy (although well air-conditioned) place. Everything was well organized.




The first talk, after a few greetings, was by Dr. Jasbir Dhaliwal from the University of Memphis, who talked about his experience collaborating with industry. He established the Systems Testing Excellence Program (STEP) in collaboration with FedEx.
He talked about the challenge of collaboration between the scientific approach of academia and the art approach of industry practitioners, which he called the Science – Art gap.
During the day, several speakers referred to the different approaches of industry and academia: academia is focused on verification, while the industry is more concerned with validation.

Nir Caliv, a Principal Test Architect from Microsoft Israel, talked about Continuous Delivery and Test in Production. He described how the shift from producing boxed software packages to services in the cloud changed their development and testing methodology – for example, shifting from long development and test cycles that end in mass distribution to very short ones that reach the production environment very fast. He talked about the impact of the need, and the ability, to change the SW quickly. He mentioned the move from modeling the system to using the "real thing", and from simulating the end-user environment to monitoring the production service itself. Monitoring gained a much more significant place than in the past, and "quality features" that enable this monitoring were added to the product. Analyzing the monitored data became a major part of the role of the "Software Development Engineer in Test", as Microsoft calls its testers. They moved from passive monitoring to data mining that involves machine learning.
It seems that when monitoring changed from a "luxury" to a "necessity", it forced them to put a large effort into this area. Organizations whose products have not yet made this move can learn from that: look at the benefits of this direction for their own testing, and invest more in monitoring capabilities and in gathering data from the production environment.


In the lobby, Dr. Avi Ofer from ALM4U gave a demo of a new method for verification using temporal logic. The method originated in hardware verification, but he showed how it can be used in software verification as well, including the ability to go back in time and debug by replaying the actions without running the debugger again.


Dr. Meir Kalech from Ben-Gurion University talked about "Artificial Intelligence Techniques to Improve Software Testing". In his talk, he described a project that uses a model-based approach to diagnosis in complex environments like software, based on the IEEE Zoltar toolset for automatic fault localization. His system suggests scenarios for failure isolation. While he suggested that the system be used to suggest the "next step" to testers who experience a failure, my view is that it is more suitable as an addition to automated checks than as a tool for human beings.

Ron Moussafi from Intel talked about "The Fab Experience: What Intel Fabs Can Teach Us about Software". Ron has a lot of experience in the semiconductor manufacturing world, but he currently manages two test departments, one of which I work in. He talked about the difference in the approaches of the two industries. While the fab (semiconductor fabrication plant) has a culture that focuses on quality, where quality is a main factor that is measured with no tolerance for incompatibility, the software industry's culture is focused on other aspects. One of the causes of the difference is that in the manufacturing world the cost of a single error is very noticeable, while in the SW industry the cost of error is usually hard to measure, but has the impact of "death by a thousand papercuts". While "fab" people see themselves as scientists, the image of the SW guy is more of a "lone ranger". Ron suggested some actions to improve the quality culture of SW development organizations, like calculating and publishing the cost of defects, and connecting failures to their causes in order to improve the process instead of just focusing on fixing the problem.

Prof. Mark Last from the host university talked about Using Data Mining for Automated Design of Software Tests. He showed a method that performs data fuzzing on legacy, "known to be good" software and uses data mining techniques to select a regression package, which is then run on a new version of the software.

Prof. Ron F. Kenet from KPA Ltd. talked about "Process Improvement and CMMI for Systems and Software". Since I am not really interested in the subject, I did not take notes on his talk. However, one of the examples he gave did grab my attention, as it connects to one of the other talks: he demonstrated measuring the point of fault during the development process, which can be useful if you want to try Ron Moussafi's suggestion to connect failures to their causes.

Dr. Amir Tomer from the Kinneret Academic College talked about "Software Intensive Systems Modeling – A Methodological Approach". He described his method of using UML to describe SW systems. In his method, each entity has four characteristics: two relate to the requirements from the entity – the environment it operates in and the services it provides – and two relate to its design – its structure and its behavior. He showed how he connects the entities, so that the services and behavior of one entity are actually the environment of another entity.

Scott Barber is a well-known tester, especially if you are interested in performance testing. With no offence to the other speakers, he was the main reason for my participation in the conference. Scott's talk was "A Practitioner's View of Commercial Business's Desired Value Add from Software Testing". He was able to educate not only the academic participants but also the industry people on how commercial decision makers view Testing, and on the challenge of explaining the added value of testing to them. This can be quite difficult when they think of testing as just another thing they have to pay for to have software built – like the caffeine that is also needed – and don't distinguish between Testing and quality. Scott said that "the industry pays for value to get to the market sooner, faster and cheaper"; these are the things you want to focus on, and you should make sure to communicate your contribution to achieving them.

I was not able to participate in the panel discussion afterwards “Product Quality and Software Quality:  Are They Aligned?”

To summarize, the conference was very interesting and exposed the participants to different views of Testing. While, as a practitioner, I was not familiar with the language and approach of the academic speakers, and I have to admit that I giggled a bit when they talked about "lines of code" or gave an example in FORTRAN, it was refreshing to hear such different views.
The academic engagement with Testing is in its early phases, which means it has a lot of room for great things to happen. This conference was a big step in that direction and a good way to connect people from the two disciplines.

As an Intel employee, I am proud that Intel was a major sponsor of such a great event, which was open to anyone who has an interest in our field.




While the verification aspects of the profession were well represented, I missed academic speakers from other disciplines that have no less impact on our practice – I would argue even more. Cooperation with researchers in areas such as Psychology, Anthropology, and Education may enable academia to address more of the industry's needs and be more relevant to it.