Monday, 1 February 2016

Effective User Story

Introduction
A BA, "i.e. Business Analyst", is someone who can conduct minds of business to minds of developers, he is a SME "Subject Matter Expert" in the area of requirements,  he can evaluate business model against technology, he can ask and answer tough questions to elaborate needs and requirements, he usually thinks in product, team, service, revenue, finance, profit, profile, business plan, projects, market, idea, sales, strategy, innovation, customer, goals, management, opportunity and competition. He can capture, clarify and confirm user stories.

<<As a BA, I can author user stories based on five rules, To simplify the elaboration process>>


This article is all about writing effective user stories, so let us talk business analysis and discover an approach that may help us when we get to play the BA role in a project. User stories are proving to be a phenomenal tool in a BA's toolkit for defining the business need. They have rapidly become one of the most popular forms for expressing stakeholders' requirements on projects, whether the project follows an Agile methodology or a traditional one.

Some people, like Ellen Gottesdiener, have a different opinion about user stories. She considers a user story not a kind of requirement but a way to enrich conversations in order to elaborate wants into needs and then into requirements; she believes the real requirements are the acceptance criteria, which must be put in context to be communicated with stakeholders and the team.

However, in this article I will try to discuss how to write effective user stories that express business needs and minimize misunderstanding. The effective user story, in brief, is simple, complete, well structured, understood and measurable. Below, I will try to write in detail about this effectiveness phenomenon, from a viewpoint that I captured out of Henrik Kniberg's book "Lean from the Trenches", Tom's BA-EXPERTS channel and PMI's Business Analysis Virtual Conference 2015.

User-story Components
The user story has a well-known mold with three popular fields: the "As a ..." field, which expresses the role that the author wants to represent; the "I/we ..." field, which states the feature, ability or functionality that is needed, with certain qualities; and the "To ..." field, which states the goal or objective that the author wants to reach.

<<As a {role}, {can or do or have something with featured qualities}, To {achieve a business goal}>>

Those are not the only dimensions of user stories: Ellen Gottesdiener has defined seven dimensions (User, Interface, Action, Data, Business rules, Environment and Quality attributes), classifying Interface, Environment and Quality attributes as non-functional dimensions and the remaining dimensions as functional.

However, if you follow the user story paradigm faithfully, you will capture your user story on the front of a 3 x 5 index card, though you can use any other tool to capture it. In the end, you must constrain the length of the user story to one or two brief sentences, and you should author it in one of the two widely recognized molds:

<<As a {role}, I can {do or have something with measurable qualities}, To {achieve a business goal}>>.
<<To {achieve a business goal}, {role} can {do or have something with measurable qualities}>>


Let's take some examples using the first mold:

<<As a Registered Student, I can download all training that I need, To study for the exam.>>
<<As an Arabian, I can differentiate Arabic sounds, To comprehend what others are saying.>>

The first example gives the developer great guidance, while the second lacks that value. As a result, we need a set of guidelines and rules to go beyond the mold and write effective user stories.

Rule #1 <<Keep user story simple>>
This simple rule states only one thing, which is what makes it the best: it advises you not to express too much in a single sentence, in order to avoid confusion. Just keep your various flavors in various user stories.

Take this complex example:
<<As an applicant, I can navigate to the coverage screen, enter personal and vehicle data, and submit the application online, To request automobile insurance coverage>>

It will be clearer if you express this multi-flavor user story as:
<<As an applicant, I can navigate to the coverage screen, To select the insurance coverage I need>>
<<As an applicant, I can enter personal and vehicle data, To compare premiums>>
<<As an applicant, I can submit an application online, To request automobile insurance coverage>>

The original user story contains a compound sentence, which is by definition never simple; that means you should avoid using "IF", "AND", "OR", "BUT", etc. in a user story. The word "AND" can be a sign of a compound sentence if the sentence contains two phrases that each contain an action (verb) and a subject or object (nouns); however, using "AND" to create a list of items with common characteristics (e.g. "Broccoli and Spinach and Carrots") does not make a compound sentence. So the conjunction "AND" in the phrase "and submit the application" creates a compound sentence, while in the phrase "and vehicle data" it merely connects related data.

The following example also shows another form of compound sentences:
<<As an underwriter, I can override a coverage denial for an applicant to increase our customer base unless the denial was due to bad credit, in which case I can confirm the denial, To protect our customer base>>

It will be much clearer if we phrase it as two user stories as follows:
<<As an underwriter, I can override a coverage denial for an applicant with good credit, To increase our customer base>>
<<As an underwriter, I can confirm the coverage denial for an applicant with bad credit, To protect our customer base>>

Delimiting phrases like "UNLESS", "EXCEPT", "WITHOUT", etc. commonly create a user story with two different phrases serving two different goals; by expressing each goal in a separate user story, the intent and purpose of each becomes much clearer.

Following this rule facilitates user story elaboration. Elaboration is the process your developers should use to ensure that they understand the story and can implement it.

Rule #2 <<An effective user story emphasizes what should be done not how to do it>>
Builders, and particularly developers, have long been asking the business community (the customers) to tell them what they want built, not how to build it. After all, the "HOW" belongs to the domain of the builder community; it is their job. So this rule is all about the "NOT HOW"!

As a BA authoring a user story, you should focus on business results and avoid thinking about preconceived solutions or how to achieve them. That is because your development team will simply implement what you asked for, without considering better alternatives; this is called the "solution trap" or the "HOW trap". Thinking in business results includes, but is not limited to, the business plan, team, projects, market, ideas, sales, strategy, innovation, customers, goals, management, opportunity and competition. Let the developers think about how to achieve the results, given your parameters. You can do this by thinking about the destination instead of the journey. See this example:
<<As a passenger, I can select my destination from a drop-down box, To avoid entering an invalid city name>>

This complies with rule #1, so it is simple, but expressing it that way limits the developer's choices. It is a preconceived solution because it assumes the passenger selects a destination from a drop-down box. It is better to rewrite this user story as follows:
<<As a passenger, I can submit a valid destination, To ensure correct booking>>

Briefly, rule #2 says: "An effective user story expresses WHAT should be done, not HOW to accomplish it." You do that by: (1) avoiding preconceived solutions, (2) describing the business result, not the technology needed, and (3) describing the destination, not the journey.


Rule#3 <<An effective user story is relevant to the project>>
This rule is all about the content of the user story: the thing your project will deliver. You have a couple of options for delineating what the project can and cannot do: (1) the project charter and (2) the project scope statement. A project charter typically describes the project in general terms, while the scope statement is usually much more specific. So, while the charter can be somewhat vague, the scope statement typically specifies the business processes, functions, organizational units, roles or jobs that the project might affect or influence. The scope statement can also specifically exclude certain components to further clarify what the project will not do.
In an Agile environment, the Agile team should ask to elaborate the story before they are ready to start coding. In a conventional environment, the business analyst will do the same before publishing a BRD (Business Requirements Document). In either case, discovering at that point that a user story is not in the scope of the project is unsettling, to say the least; still, there is no cheaper time to change than right then, prior to coding. If you express your user stories so they are relevant to the project, you waste less time writing, elaborating and discarding them; you also avoid cluttering the backlog or the BRD with user stories that the project will not implement, which saves considerable time for the entire team.

A more difficult thing to discover in a user story is its tail. The tail of a user story encompasses the consequences of its implementation caused by incorrect assumptions: assumptions relating to the BA's awareness of law, environment, facilities, organizational hierarchy, stakeholders, workflow and decision powers. Consider what cascading change the user story could cause that exceeds the project charter or scope.

To summarize, rule #3 reads: "An effective user story targets components that are relevant to the project, in that it falls within the charter or the scope statement; it defines something about the solution that the business community needs or wants; and it has a short tail, meaning it does not create cascading changes that exceed the project's authority."

Keeping your user stories relevant will save you and the project time and money, and make you much more confident within the IT department.

Rule#4 <<An effective user story is clear, understood and not ambiguous>>
The major obstacle to effective communication is ambiguity. You create ambiguity when you use terms and phrases that different members of your target audience will interpret differently. If your project uses an Agile software development approach, the project team will address ambiguity conversationally during the elaboration of the user story; however, even in an Agile scenario it can be beneficial to remove ambiguity earlier. The less ambiguity you have in the phrasing of a user story at the beginning, the more likely it is that the solution will deliver what you want with minimal cost. What causes ambiguity in the first place is that you, as the author, think that "it seems so simple" that the rest of the world should understand it right away.

Because you know what you meant when you wrote the user story, it is difficult for you to identify ambiguous phrases; you have to switch hats and become the reader instead of the author. Simply read your user stories and try your best to misunderstand them. To get the biggest effect from this exercise, it is recommended that you perform the critical review in a different environment from the one in which you wrote the statements. For instance, if you wrote the user stories in the morning, review them in the afternoon, or vice versa; if you wrote them at your desk, review them at home. By changing the time or the physical environment, you might change your perception of what you wrote. This activity is called "desk checking to discover ambiguity", and you might be amazed at how much ambiguity you can identify when you really focus on it.

What about others? Since the user story is a fundamental tool of communication between the business community and the IT team, and it forms the foundation of a future IT solution, you might run this check by a colleague, a peer or your manager to get their tick on it.

One of the best ways to test whether someone else truly understands a user story the way you intend is to ask him or her to rewrite it without using any of your words, except for articles (a, an, the), prepositions, pronouns and conjunctions. Specifically, the other person cannot reuse any of the nouns, verbs, adverbs or adjectives of your user story. This little exercise forces the other person to think outside the box; it forces him or her to use terms that are different but mean the same thing to them. If you can read their user story and you both agree that it still means the same as the original, you can feel a lot more confident that you are getting your point across. If, however, you have to ask them why they used a specific word, and it turns out it means something different than you intended, that should be a red flag to revise your user story and make sure that the two of you agree on a common meaning. If you are going to try this little technique, two words of advice: (1) people with different backgrounds often think differently, so you will get better feedback if you pick someone of the opposite sex to interpret your sentence; (2) different job functions require different thinking styles, so you might ask a developer or designer, someone who will later actually have to understand the sentence, to do the rewrite. Following these two recommendations should drastically improve the quality of your feedback and the entire process of assessing the ambiguity of your user story, which will definitely improve the quality of the delivered solution.

So, rule #4 says: "An effective user story is easily understandable, unambiguous and clear to all target audiences." Removing ambiguity is definitely the first step toward improving communication between those who want a solution and those who deliver it.


Rule#5 <<An effective user story has measurable non functional requirements>>
Developers could simply accept the user story as expressed, or more correctly as they interpret it, and then implement it with impunity. But if they challenge the statement before they start development, it might change, and guess what? There is no cheaper time to change than the time prior to coding.

Following the user story paradigm, developers are not concerned with the details of a user story until they initiate development for it. At that time they should schedule some time with the author of the user story to dive into the nitty-gritty details (the elaboration discussion). During this discussion, the developers will note on the back of the index card, or in any appropriate tool, how they can prove the solution meets the author's needs once they deliver it. If the measurable quality of the user story is expressed in specific numbers, the discussion can focus on why this particular number is important, how much leeway (if any) there is in the number, and possibly how exactly the quality will be counted.

Non-functional requirements are one of the biggest issues we face in the world of business analysis; they are the most commonly missed, misunderstood or misrepresented type of requirement. It is not enough to know what you want; you need to be able to specify, in measurable terms, how a third party (the developers) can deliver what you want. The latest possible time to define measurable dimensions is immediately before the developers start coding, but you should prepare yourself by thinking about this in advance. If your non-functional requirements are not objectively measurable, you need to revise, rewrite or expand them.

Measurable qualities define acceptable behavior of the system from the user's perspective. The challenge is that there are two categories of measurable qualities: (1) objective measures and (2) subjective measures. Objective measures contain numbers, such as "10,000 transactions per hour", "one-second response time" or "6 packs", which can be objectively measured and validated by a third party. Subjective qualities, like "easy to maintain", "high quality" or "good sound", by definition cannot be objectively measured. These subjective qualities are valid from the business perspective, which considers them performance needs, but to be usable in a user story you need to clarify them.

Here are some examples of non-functional requirements that you might need to define, whether you use user stories or any other form to express your requirements:
(1) Frequency: how often do the people playing the identified role need this user story?
(2) Urgency: how quickly does the application have to respond to the user's needs?
(3) Volume: how much business data will the application maintain for this user story?
(4) Accuracy: how precise and timely does the data have to be, from a business perspective?
(5) Usability: what features make the application easily usable by the role?
(6) Learnability: how quickly can users in that role learn how to use the application?
(7) Flexibility (or scalability): how fast do you anticipate frequency and volume to change?
(8) Reliability: how critical is it that the application does not fail?
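To make the objective/subjective distinction concrete, here is a minimal sketch (in Python, since the idea is language-neutral; the premium lookup and the one-second budget are invented for illustration): once a quality is expressed as an objective number, it can be checked mechanically, while "fast enough" cannot.

```python
# Hypothetical sketch: "one-second response time" is an objective,
# checkable measure; "fast enough" is not.
import time

RESPONSE_BUDGET_SECONDS = 1.0  # the agreed measurable quality


def lookup_premium(vehicle_code):
    """Toy stand-in for an insurance premium lookup."""
    return {"sedan": 320, "suv": 410}.get(vehicle_code, 0)


def within_budget(func, *args):
    """Measure one call against the agreed response-time budget."""
    start = time.perf_counter()
    func(*args)
    return (time.perf_counter() - start) <= RESPONSE_BUDGET_SECONDS
```

A check like `within_budget(lookup_premium, "sedan")` gives a yes/no answer a third party can validate, which is exactly what a subjective phrase cannot do.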

The key point is that the business community needs to define these qualities, from the business perspective, before the project moves into the development or purchasing phase. Technology can achieve almost any goal you can think of, but only if the business community defines the goals in measurable terms in advance; only then do they have a legitimate right to expect that the delivered solution will meet their needs.

The question is not whether you need to specify measurable qualities in user stories, but rather who is responsible for them: the business community, the business analyst, or the developer community. The correct answer is all three! Developers are obviously involved in getting the technology to work fast enough to meet the highest expected traffic, but it is the business community's job to anticipate just how high that is (e.g. "we are planning for an annual 15% growth rate"), and the one wearing the business analysis hat is responsible for capturing, clarifying and confirming user stories; he also has to ask the tough questions, to make the business community aware of undefined or unmeasurable qualities.

The Assumption "i.e. non-functional requirements will take care of themselves" has proven to be a high risk endeavor for many organizations, so take care!

Monday, 18 January 2016

S.O.L.I.D principles

Introduction

A principle is a high-level idea born out of various minds and brains; it is recognized on the basis of human experience and experiment, and it inspires other people when they take similar actions or decisions. Critical thinking is the curiosity and carefulness needed to provide a full-fledged set of solutions free of mistakes, errors, accidents and problems.

SOLID is a group of principles that guide the critical thinking of software developers and programmers while they are writing code. Being aware of these principles fosters a common understanding between developers about the decisions they usually make in the body of their code. As a result, a group of design patterns has come out of these principles and progressively become popular, well-known best practices. There currently exists a set of known design patterns based on the SOLID principles; they are out of the scope of this article, and I'll write about them later on.

In this article I will talk for a while about each principle and spotlight some practical examples that already exist in the .NET Framework.

S: SRP Principle <<a class should take care of only one responsibility>>
SRP is an abbreviation of the "Single Responsibility" principle. This principle tells the developer to write his class for one single purpose: the class should take care of only one responsibility, it should have only one reason to change, and it should do one thing and do it very well. So if it is an entity class, it should only contain properties to hold values, without performing any business, data-persistence or UI functionality. If it is a provider class, it should only contain the data-persistence functionality, like opening a connection to the DB server, executing SQL commands or preventing security threats against the data. This principle is essential for clean code that is easy for other developers to maintain. They will find it readable and easily approachable; they can move from one function to another without losing sight of its purpose. They will also find it reusable, so your class helps others reduce effort and consequently reduce the overall cost. One example of the S principle in the .NET Framework is the "Math" class, a static class that can't be instantiated; its only purpose is to provide a collection of mathematical functions that are available to be used in any context. Ideally, every class in the .NET Framework should satisfy the "Single Responsibility" principle.
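The entity/provider split described above can be sketched as follows (a minimal Python illustration rather than .NET; the Order and OrderRepository names are hypothetical): each class has exactly one reason to change.

```python
# Hypothetical SRP sketch: one responsibility per class.

class Order:
    """Entity class: only holds values, no business or persistence logic."""
    def __init__(self, order_id, amount):
        self.order_id = order_id
        self.amount = amount


class OrderRepository:
    """Provider class: only responsible for persisting orders."""
    def __init__(self):
        self._storage = {}  # stands in for a real database connection

    def save(self, order):
        self._storage[order.order_id] = order.amount

    def find_amount(self, order_id):
        return self._storage[order_id]
```

If the storage mechanism changes, only OrderRepository changes; if the order's shape changes, only Order changes.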

O: OCP Principle <<a class is open for extension but closed for modification>>
OCP is an abbreviation of the "Open/Closed" principle. This principle tells the developer to design his class so that another developer (maybe himself) is not allowed to modify its functionality but is allowed to extend it: the class should be open for extension but closed for modification. The creator-developer can use many techniques to provide such a facility, like virtual methods, callback functions, events, delegates, actions or lambdas. The user-developer, on the other hand, can use the corresponding techniques to extend the class's functionality, like higher-order functions, a functional programming style that allows him to pass his extension functions as parameters to the class's methods, so the class's methods will call his extension functions to modify, format or evaluate the values of its properties. The .NET Framework provides the concept of extension methods to support the "Open/Closed" principle, which allows us to extend any class using a simple coding style with a static class, a public static function and the "this" keyword. The most popular example in .NET is LINQ, which extends IEnumerable with a large number of methods like Where(), Join() and OrderBy(). IEnumerable is "closed" for modification but "open" for extension by LINQ or any custom extensions.
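The higher-order-function technique mentioned above can be sketched like this (a minimal Python illustration, not the .NET extension-method mechanism; the Invoice class and pricing rules are invented): the class stays untouched while callers extend its behavior by passing in functions.

```python
# Hypothetical OCP sketch: Invoice is closed for modification, but its
# total can be extended with pricing rules passed as callbacks.

class Invoice:
    def __init__(self, subtotal):
        self.subtotal = subtotal

    def total(self, *rules):
        """Apply each extension rule in turn, without modifying this class."""
        amount = self.subtotal
        for rule in rules:
            amount = rule(amount)
        return amount


def add_tax(amount):
    """Extension written later; the Invoice class is never edited."""
    return amount * 1.10


def loyalty_discount(amount):
    return amount - 5
```

A new pricing rule means a new small function, never a change to Invoice itself.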

L: LSP Principle <<child class object should be able to replace parent class object in any context>>
LSP is an abbreviation of the "Liskov Substitution" principle, introduced by Barbara Liskov, an American computer scientist, in 1987. The principle emphasizes the deep relationship between a supertype and its subtypes: the subtype (derived or child class) should respect the behavior of the supertype (base or parent class). If a developer respects this principle, he can safely use a subtype's object in any context where a supertype's object is expected. So this principle is all about a parent-class reference being able to refer to a child object at run time without any problem, and a child-class object being able to replace a parent-class object at run time without any problem. The most popular example in .NET is the "Object" class, the parent of all classes in .NET. The Object methods, such as GetHashCode(), GetType(), Equals(), ToString() and Finalize(), are available in every class, and any class can change the behavior of these methods using the "override" keyword, but it cannot change the essential contract of the "Object" type, so a child object safely replaces "Object" in any context.
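The substitution idea can be sketched as follows (a minimal Python illustration with invented Shape classes, not a .NET example): code written against the supertype keeps working for every well-behaved subtype.

```python
# Hypothetical LSP sketch: any subtype of Shape can stand in wherever a
# Shape is expected, because each honors the same area() contract.

class Shape:
    def area(self):
        raise NotImplementedError


class Rectangle(Shape):
    def __init__(self, width, height):
        self.width, self.height = width, height

    def area(self):
        return self.width * self.height


class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2


def total_area(shapes):
    """Written against the supertype; never needs to know the subtype."""
    return sum(s.area() for s in shapes)
```

total_area never inspects which subtype it received, which is exactly the guarantee LSP is about.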

I: ISP Principle <<clients should not be forced to depend on methods that they do not use>>
ISP is an abbreviation of the "Interface Segregation" principle. This principle tells the developer to simplify the role of each interface, in order to keep the interface stable in the face of future change requests, so that clients remain stable once they have built their systems on it. This way, we have different interfaces for different roles; the implementation may still be accumulated in one class that inherits from multiple interfaces, so the client is never forced to depend on an interface he does not need. It is all about identifying small roles instead of one generic role, and assigning every little role to an individual interface, so we don't define too many methods on any single interface. In .NET, for example, we find interfaces like IEquatable<T>, IComparable<T> and IEnumerable<T> that have only one method each: Equals(T), CompareTo(T) and GetEnumerator(). It would have been possible to accumulate the three methods into one interface, but that would force clients to depend on methods they will not use. Segregated interfaces are easier to extend individually, and a class can implement one or more of them as needed.
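The small-roles idea can be sketched like this (a minimal Python illustration using abstract base classes in place of .NET interfaces; all names are invented): one class may implement several small roles, while a client depends only on the role it needs.

```python
# Hypothetical ISP sketch: two small roles instead of one fat interface,
# so a read-only client never depends on write methods.
from abc import ABC, abstractmethod


class Readable(ABC):
    @abstractmethod
    def read(self): ...


class Writable(ABC):
    @abstractmethod
    def write(self, data): ...


class LogFile(Readable, Writable):
    """One class may still implement several segregated roles."""
    def __init__(self):
        self._lines = []

    def read(self):
        return list(self._lines)

    def write(self, data):
        self._lines.append(data)


class ReadOnlyViewer:
    """Client that depends only on the Readable role."""
    def __init__(self, source: Readable):
        self.source = source

    def show(self):
        return ", ".join(self.source.read())
```

If the Writable role changes tomorrow, ReadOnlyViewer is untouched, which is the stability ISP promises.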

D: DIP Principle <<high level modules should not depend on low level modules, instead both should depend upon abstractions>>
DIP is an abbreviation of the "Dependency Inversion" principle. This principle tells the developer to give himself the freedom to switch or swap dependencies at run time or compile time, instead of tightly coupling modules or classes at design time. That is achieved by abstracting the dependency between modules: if we have the screen and the printer as output modules, we can make our copier module depend on an abstract output module, which will be assigned one of them to copy to at run time. This is often realized through dependency injection and is important for developing loosely coupled software systems. .NET offers generics and interfaces as very efficient techniques for dynamic dependencies, whether at compile time or at run time. That way we get reusable and extendable modules that are also convenient for testing.
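The copier example above can be sketched as follows (a minimal Python illustration of the same idea; the class names follow the paragraph's screen/printer/copier scenario and are otherwise invented): the high-level Copier depends on the Output abstraction, and the concrete device is injected.

```python
# Hypothetical DIP sketch: Copier depends on the Output abstraction,
# not on a concrete printer or screen.
from abc import ABC, abstractmethod


class Output(ABC):
    @abstractmethod
    def send(self, text): ...


class Screen(Output):
    def __init__(self):
        self.shown = []

    def send(self, text):
        self.shown.append(text)


class Printer(Output):
    def __init__(self):
        self.printed = []

    def send(self, text):
        self.printed.append(text)


class Copier:
    """High-level module: the output device is injected, so it can be
    swapped at run time without changing this class."""
    def __init__(self, device: Output):
        self.device = device

    def copy(self, text):
        self.device.send(text)
```

Swapping `Copier(Screen())` for `Copier(Printer())` changes the destination without touching Copier, which also makes the class trivial to test with a fake device.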

Saturday, 16 January 2016

Clean Code

Introduction
In this article, I want to discuss clean code as a modern approach to building elegant software systems, and to spotlight the main rules, practices, behaviors and knowledge that any developer needs in order to write clean code. I have been influenced by the ideas Jeremy Clark spreads in his sessions on YouTube, where he gives many valuable pieces of practical advice along with a clear practical guide.

Clean is a Rule
"Try and leave this world a little better than you found it." - Robert Baden-Powell Rule -
"Always leave the campground cleaner than you found it." - The Boy Scout Rule -
"Always leave the code cleaner than you found it." - Clean Coder Rule -
"Always check a module in cleaner than when you checked it out." - Clean Coder Rule -


Clean Code
Clean code, from Jeremy Clark's point of view, is code that is readable, maintainable, testable and elegant. Some people, like Jeremy, don't favor the title "architect" for software engineers, because the software industry is so different from the building industry; software has particular aspects that don't exist in other fields, for example bug fixes, changing business needs, enhancements and new functionality. This specialty of the software industry requires developers to be careful to write clean code, because of some known truths: (1) clean code saves time; (2) any code has at least a five-year life span, if not ten or more; (3) we can't take a short-term view of software.

What really prevents a developer from writing clean code is that he usually says "I will clean it up later", along with the other famous words of administrators: "Ship it now!". So it is very important to avoid all the preventers, including ignorance, stubbornness, short-term syndrome, arrogance, job security and scheduling. The developer shouldn't let resolute adherence to his own ideas or desires work against writing clean code. A developer makes a mistake when he works in reaction instead of action, and when, feeling his own importance, he rejects cleaning up his code.

The best way to convince yourself to write clean code is to imagine that the developer who comes after you is a homicidal maniac who knows where you live :)

Intentional Naming
It is very important to get your intent across to other developers; that is the thing you should focus on when you come to name an object or a method. A not-very-good example of an object name is "theList": while another developer will know it is the name of a collection, he doesn't know the type of the items inside or the purpose hidden behind the collection. You haven't passed him any description to trigger his imagination; it is all a matter of what intent you want to pass to him so he can work with your code. A good naming example is "ProductList", but the more elegant one is "ProductCatalog".


Naming standards include things like UpperCamelCase (PascalCase), lowerCamelCase, lower_case_underscores and Upper_Case_Underscores. It does not really matter which naming standard you choose; what really matters is to choose one and use it immediately and consistently: just have a standard and be consistent.

You should use nouns for naming variables, fields, properties and parameters; for example, "indexer", "currentUser", "priceFilter". You should use verbs to name methods and functions; for example, "SaveOrder()", "GetApplicableDiscount()", "FetchCustomerData()", "RunPayroll()". A name like "recdptrl" is not only ambiguous but also difficult to pronounce, so it is much better to make it easy to read and type, for example "recordDepartmentRole" or "receivedPatrol".
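A quick before/after sketch of intention-revealing names (in Python; the discount function is invented for illustration):

```python
# Hypothetical naming sketch: same behavior, very different readability.

# Unclear: cryptic function name, anonymous parameter.
def prc(l):
    return [x * 0.9 for x in l]


# Clearer: a verb phrase for the function, nouns for the data.
def apply_member_discount(prices):
    """Return the prices with a 10% member discount applied."""
    return [price * 0.9 for price in prices]
```

Both functions do the same thing, but only the second tells the next developer what it is for without making him read the body.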

Comments
Comments should not be used to tell what the code does; if you need to explain what the code is doing, then you need to rewrite the code to make it clearer. However, we can use comments to describe the intent or consequences of code. We also have to avoid lying comments: it is common to update or move the code but not the comments, so the comments lie. Avoid that by explaining yourself in code, and rewrite the code if it is unclear.

A good comment is one that describes intent or clarification, such as "//Product price of one month ago"; gives warnings or consequences, such as "//we do ... to make sure that ..."; or mentions to-dos, such as "//Magic variables should be moved to the configuration file", which is temporary and must be removed when the task is completed.

A bad comment is one that plays the role of another tool, such as a "journaling" comment that plays the role of source control, for example "//Wael | 20 Apr 2015 | Fix Bug No####"; this is what source control is for (i.e. who, what and when), so it is recommended to know your tools and make use of them. Avoid "noise" comments such as "//default constructor". Avoid commented-out code; just delete code that is no longer in use, since it is the role of source control to bring old code back if you need it.

Refactoring
Refactoring is all about making code better without changing its functionality. Look at your code and ask yourself: is this bad code? You may say no, it is good code and nothing is wrong with it, but it may not feel good to others. They may find it a little difficult to understand at first; they will probably need time to scan it to figure out what it does, and that is what you should expect from any developer working with that code for the first time. So ask yourself again what you can do to make this code easier to work with, because six months from now someone is going to ask you to make a change, and you might not exactly remember how it works, so you will have to look at the code to figure it out, at least for yourself.

Unit tests are essential. If you do not have unit tests, you do not really know what your code does, and if you do not know what your code does, you cannot safely refactor it. Refactoring step one is to bring your code under test; refactoring step two is to update the code safely and confidently.

However, you should think about doing something for your code, like hierarchical functions, which is the concept of categorizing functions into high, mid and low levels. Start with the high-level function, the basic big chunk of what you are trying to implement, then drill down to the more detailed functions that hold the actual functionality. Then refactor your code by extracting methods out of the large blocks; you can press "Ctrl + [R | M]" to do so in Visual Studio, or right-click the editor and select "Refactor >> Extract Method". Make sure each new extracted method is understandable from its name. A fast way to make sure you did the right refactoring is to run the unit tests after each refactoring step; you can even run the tests with every build by clicking the "Run Test After Build" button at the top of the Test Explorer. Having unit tests in place makes sure we don't accidentally change the functionality of our code. The unit-test naming convention you use will help a lot in finding where exactly an error happens. Roy Osherove, who wrote "The Art of Unit Testing", recommends a naming scheme, "model_operation_result", that slices a unit-test name into three parts: the first is the model under test, the second is the action under way, and the last is the result we are expecting.
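A small C++ sketch of the hierarchical-functions idea (the report-building functions are invented for illustration); after extracting methods, the high-level function reads like an outline:

```cpp
#include <string>
#include <vector>

// Low-level extracted methods: each says *how* one detail is done,
// and each is understandable from its name alone.
std::string BuildHeader(const std::string& title)
{
    return "== " + title + " ==\n";
}

std::string BuildBody(const std::vector<std::string>& lines)
{
    std::string body;
    for (const std::string& line : lines)
        body += line + "\n";
    return body;
}

// High-level function: says *what* happens, one step per line.
std::string BuildReport(const std::string& title,
                        const std::vector<std::string>& lines)
{
    return BuildHeader(title) + BuildBody(lines);
}
```
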

The refactoring "DRY Principle" says "Don't Repeat Yourself!": it means don't copy, paste and modify code, but create a common piece of code to be reused in multiple contexts. If you have two sections with the same code, they really need to be combined, so that we eliminate the duplication.
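A minimal C++ sketch (the trimming helper and both call sites are invented): instead of copy-pasting the same logic into two places, one extracted function serves both contexts:

```cpp
#include <string>

// One shared piece of code instead of two pasted copies.
std::string TrimLeft(const std::string& s)
{
    size_t start = s.find_first_not_of(" \t");
    return start == std::string::npos ? "" : s.substr(start);
}

// Both call sites reuse the common function.
std::string NormalizeUserName(const std::string& raw) { return TrimLeft(raw); }
std::string NormalizeCityName(const std::string& raw) { return TrimLeft(raw); }
```
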

Visual Studio automatically fixes indentation when you press "Ctrl + [K | D]"; that way you can easily see at which level every statement is placed.

Use "#Region" to organize your code, with one region for each group of similar parts of the code, such as fields, properties, constructors, public methods, private methods, events and notify-changed members. That way everything is very well organized, and we can see the members grouped in Solution Explorer too if we expand "file name >> class name": Solution Explorer then shows all the members of the class grouped together the same way we already grouped them with "#Region".
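The "#Region" directive above is the C#/VB one; in C++ under Visual Studio the equivalent is "#pragma region", as in this minimal sketch (the Account class is invented for illustration):

```cpp
// "#pragma region" is the C++ counterpart of C#'s "#region" in Visual
// Studio; compilers that don't know the pragma simply ignore it.
class Account
{
#pragma region Fields
    double balance = 0.0;
#pragma endregion

#pragma region Public methods
public:
    void Deposit(double amount) { balance += amount; }
    double Balance() const { return balance; }
#pragma endregion
};
```
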

Using refactoring techniques this way lets us tell what our code is doing at a high level, and if we need to drill down into the details, we can do so easily. Remember, we started with "not a bad code" and we ended with something very easy to walk up to and very approachable, so six months from now, if anyone asks you to make updates to that code, you can walk up to it, quickly remember where you are, and navigate to where you need to make the changes.

Books To Read
Clean Code - Book by Robert Cecil Martin
Working Effectively with Legacy Code - Book by Michael C. Feathers
Refactoring: Improving the Design of Existing Code - Book by Martin Fowler
Refactoring to Patterns - Book by Joshua Kerievsky

Friday, 15 January 2016

Visual Studio 2013 Productivity Power Tools New Features

With the release of Productivity Power Tools 2013 for Visual Studio 2013 come eleven new powerful features. Some of these features are brand new extensions, while others are improvements to functionality found in previous versions of Productivity Power Tools.

For full details of the features, please refer to the Productivity Power Tools 2013 entry.

In this post I'll explain what I have learned from a video that introduces these new features such as quick tasks, recently closed documents, peek help, HTML copy and others.

Peek Help "Alt + F1"
This feature fetches the MSDN documentation for a type and displays it inline inside the editor. You can right-click any type in the editor and click the "Peek Help" command, or press "Alt + F1", to fetch the MSDN documentation for that type and display it inline in a full browser control. You can then navigate to other links and even do things like find, click the "Promote To Document" button to open the link in a real browser, or hit ESC to go back to your editor. This feature is extremely useful for learning about a new type without having to leave Visual Studio.

Solution Explorer Squiggles
This feature shows squiggles (red underlines) on files in Solution Explorer when there are errors, warnings or messages. You can hover the mouse pointer over these files to get a quick view of all the issues, then double-click any item to quickly navigate to that point in the editor. You can also filter Solution Explorer by clicking the "Error Filters" combo-box or pressing "Ctrl + L E" to get just the list of files that have errors. This feature is extremely useful for getting a quick view of the health of the solution without having to open the Error List.

Block Structure Visualizer
This feature draws markers in the editor corresponding to the different blocks in your code. For example, you may see vertical lines in different colors for different blocks, such as a line corresponding to a namespace, a class or a for-loop. The lines and colors differ depending on the type of block, and if the start of a block is scrolled off the top of the screen, hovering the mouse over its line for a moment shows a preview of the block. This extension can be really helpful for understanding the structure of your code base when it is scrolled out of view.

Double Click To Maximize Windows
This feature allows you to double-click any window inside Visual Studio to quickly maximize it to full screen, then double-click it again to dock it back into its original position. It works not just for tool windows but also for documents, so you can double-click a document to quickly maximize it to full screen and double-click again to restore it. This feature is extremely useful when you want a quick full-screen view of certain things inside Visual Studio.

The Time Stamp Margin In The Debug Output Window
If you open the Output window and switch to the Debug section, you will see time stamps next to the debug messages. This feature is very useful when you have a very large number of debug messages and want to understand when each message was created.

Quick Task Extension Improvement 
This feature was introduced in previous versions of Power Tools. For example, if you want to turn line numbers on in the editor, press "Ctrl + Q" and simply type "linenumon" to immediately turn line numbers on. You can list all the tasks available to you by typing "@tasks e", which lists all the tasks provided by the Quick Tasks extension; there are over 32 tasks from which you can choose. The most popular of them is called "PresentOn", which bumps up the editor and environment fonts to optimize the layout for presentations (i.e. adjusts fonts for presenting code). A popular addition to the "PresentOn" extension is the ability to customize the font sizes and font families by typing "PresentEdit", which opens an XML file where you can set the font family and font size you want for your "PresentOn" task.

Document Tab Extension Improvement
It colorizes each document tab depending on the project that the file belongs to. You can customize document tabs by clicking the "Customize" option in the menu that pops up when you click the arrow button at the right-hand side of the tab ribbon, where you will find an option called "Show icons" under "Productivity Power Tools >> Custom Document Well >> General". Checking this option shows an icon right on each document tab. So if you have a solution with a large number of languages or different types of files, this extension can really help you land on the file you are looking for.

Go To Definition Improvement
This feature allows you to navigate to definitions with "Ctrl + Click" on a symbol. It is made easy by opening a peek view that shows the definition inline, right below the symbol; of course this is optional, and you can customize how symbol definitions are shown inside the peek view.

HTML Copy Extension Improvement
This extension allows you to copy your code as HTML and paste it into your blog or any HTML editor. Open the Edit menu and select "Copy Html Markup" to copy your selected lines of code as HTML; if you have an editor that doesn't support WYSIWYG, you can paste the HTML content directly into it, and you will see the code copied along with its HTML tags.

Undo Close Document
If you have closed a bunch of documents in the document tab, you can click the "File" menu and select "Recently Closed Documents", which allows you to reopen any recently closed document.

Match Margin
This feature highlights all the text matches for the token under the caret, both in the editor and on the scroll bar. So if you place the caret on the "System" token, it highlights all occurrences of "System" in the file, and if you click on "Public", the scroll bar shows markers corresponding to every occurrence of the text "Public".

Monday, 26 October 2015

Microsoft Outlook Is Impacted by Gmail's New Security Standard "OAuth 2.0"

Email clients normally use security standards to access email accounts on email servers. On 15 Jul 2014, Gmail decided to increase its security measures to stop vulnerabilities. The old security standard used by Gmail and by many email clients is called "Basic Authentication"; clients used it to send passwords to Gmail servers as plain text. Gmail has launched a new security standard, "OAuth 2.0" (Open Authorization), that no longer accepts plain-text passwords, so many clients now have trouble accessing Gmail accounts.


The good news is that Gmail allows the account owner to turn off the new requirement and allow "Basic Authentication" once again, so that less secure clients work normally; this page has these enable/disable options.

A Microsoft article explains that "Google has increased its security measures to block access to Google accounts after July 15, 2014 if those accounts are being set up or synced in apps and on devices that use Basic Authentication."

A very good post explores the impact of Gmail's decision.

Gmail's redirection error page is quite ambiguous about this popular impact and has no link to the enable/disable page, but Gmail sent me a detailed email about the "sign-in attempt prevented" that happened when Outlook tried to add my Gmail account. The email included the following statement:

"We strongly recommend that you use a secure app, like Gmail, to access your account. All apps made by Google meet these security standards. Using a less secure app, on the other hand, could leave your account vulnerable. Learn more."


Saturday, 8 August 2015

Android x86 No Network in VMware


Android is a mobile operating system (OS) based on the Linux kernel and currently developed by Google. Android-x86 is a project to port the Android open source project to the x86 platform, formerly known as "patch hosting for android x86 support". The Android-x86 team created their own code base to provide support on different x86 platforms, and set up a git server to host it. It is an open source project licensed under Apache Public License 2.0.

The ISO image can be downloaded from the Android-x86 Download Page.


This article, "How to Install Android in VMware", is all about installing Android as a VM to test this lightweight OS.

"No network!" is a popular problem, so read "Solve Android x86 No Network Problems in VMware" or "Running Android x86 on VMware player with networking enabled"; both are about solving the no-network problem.


For more information about network adapters please read "What is "vlance" adapter?". 

Summary links:
How to Install Android in VMware
Download Page
Solve Android x86 No Network Problems in VMware
Running Android x86 on VMware player with networking enabled
What is "vlance" adapter?

Monday, 12 January 2015

Incremental sub-sequence out of random stream, a new C++ algorithm

This article is all about a difficult problem that I read on "hackerrank" and tried to solve using an unusual data structure.

PROBLEM: 
A sub-sequence of a "random" sequence/list/stream is obtained by deleting zero or more elements from the list.

Suppose we are given a list A where every element is a pair of integers, i.e. A = [(a1, w1), (a2, w2), ..., (aN, wN)]. A sub-sequence B = [(b1, v1), (b2, v2), ..., (bM, vM)] is called an increasing sequence if bi < bi+1 for every i (1 <= i < M).

It is required to determine the maximum weight among the weights of all increasing sub-sequences, where Weight(B) = v1 + v2 + ... + vM.




Click "here" to read more about the problem.
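Before looking at the graph-based solution below, here is a straightforward O(N^2) dynamic-programming reference in C++ (a standard technique, not the graph approach described in this article; function and variable names are my own) that computes the same maximum weight: best[i] is the maximum total weight of an increasing sub-sequence ending at element i.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// O(N^2) reference: maximum total weight over all increasing
// sub-sequences of a, where w[i] is the weight of element a[i].
uint64_t MaxIncreasingWeight(const std::vector<uint64_t>& a,
                             const std::vector<uint64_t>& w)
{
    std::vector<uint64_t> best(a.size(), 0);
    uint64_t answer = 0;
    for (size_t i = 0; i < a.size(); ++i)
    {
        best[i] = w[i]; // sub-sequence containing a[i] alone
        for (size_t j = 0; j < i; ++j)
            if (a[j] < a[i]) // a[i] may extend a sequence ending at a[j]
                best[i] = std::max(best[i], best[j] + w[i]);
        answer = std::max(answer, best[i]);
    }
    return answer;
}
```

With unit weights this reduces to the length of the longest increasing sub-sequence; for A = {1, 5, 4, 2, 3, 5, 6} and all weights 1 it returns 5 (the sub-sequence 1 2 3 5 6).
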


SOLUTION:
I have chosen a graph data structure to help me find all possible increasing sub-sequences of the given random list/stream.




I put a directed relationship ("same", "lower" or "next") between each pair of nodes in the graph. The relationship between any two nodes can be thought of as the relationship between the pronouns "I" and "it", where "I" is any node already in the graph and "it" is the next iterated node picked up from the random-list loop. This relationship is exactly one of the following:

It is called same if "I.s == it.s": my sequence number equals its sequence number.
It is called lower if "I.s > it.s": my sequence number is greater than its sequence number.
It is called next if "I.s < it.s": my sequence number is lower than its sequence number.




To construct the graph based on those relations, I used the following insertion algorithm:




Insertion algorithm:
If it is same: link it as my last "same", and give it to my "lowers" to recursively insert it into themselves using this algorithm.
If I don't have any "next" and it is greater: add it to my "nexts" list.
If it is linked as lower/same to my direct next: link it as next to myself and also to all my "sames".
If it is my direct next: insert it into all my "lowers" only if I don't have any "same".




This algorithm constructs the graph in O(N) complexity, and it takes O(M) to retrieve all the elements of any sub-sequence from the graph, where M is a constant: the length of the longest sub-sequence.




The C++ code has an additional step that determines the maximum weight directly during insertion, in a fast way.




Graph Node:
The following figure illustrates graph node as a concept:













Example:
Suppose a list A={5 4 2 3 6}; the next figure illustrates the resulting graph.













Suppose a list A={1 5 4 2 3 5 6}; the next figure illustrates the resulting graph.






C++ Code:
#include <cmath>
#include <cstdio>
#include <vector>
#include <iostream>
#include <algorithm>
#include <string.h>
using namespace std;

//----------------------------------------------
//Generic pointer
#define G_PTR void*
//----------------------------------------------
//signed data types
typedef long long int64;
typedef unsigned long long u_int64;
#define G_CHAR char
#define G_SHORT int
#define G_INT long
#define G_LONG int64
//----------------------------------------------
//unsigned data types
#define G_BYTE unsigned char
#define G_WORD unsigned int
#define G_DWORD unsigned long
#define G_QWORD u_int64
//----------------------------------------------
//Bitwise flags
#define IS_NEXT 0x01
#define IS_SAME 0x02
#define IS_LOWER 0x04


//----------------------------------------------
//Graph node to represent each element in the provided sequence
typedef struct GRAPH
{
       //b: sequence number, v: its weight
       G_QWORD b, v; //pair of (seq,wei)


       //Node default constructor
       GRAPH() :listID(0), b(0), v(0), maxAccumSteps(1), maxAccumWei(0), maxAccumStr("") {}
       //Node parameterized constructor; no need for an assignment operator
       GRAPH(G_QWORD ii, G_QWORD bb, G_QWORD vv)
       {
              listID = ii;
              b = bb;


              //Quick finding technique
              maxAccumWei = v = vv;
              maxAccumSteps = 1;
       }
       //Links to other tied graph-nodes
       G_QWORD listID; //index in the pGraph array
       GRAPH* same = NULL;//pointer to graph object that carry same sequence number but comes later
       GRAPH* lower = NULL;
       vector<GRAPH*> next;//list of pointers to allow different possible subsequences


       //Flags to stop infinite cycles and creeping while walking the graph
       G_BYTE bFlages = 0;


       inline void SetNext(bool b){ bFlages = (b ? (bFlages | IS_NEXT) : (bFlages & ~IS_NEXT)); }
       inline void SetSame(bool b){ bFlages = (b ? (bFlages | IS_SAME) : (bFlages & ~IS_SAME)); }
       inline void SetLower(bool b){ bFlages = (b ? (bFlages | IS_LOWER) : (bFlages & ~IS_LOWER)); }


       inline bool IsNext(){ return (bFlages & IS_NEXT) == IS_NEXT; }
       inline bool IsSame(){ return (bFlages & IS_SAME) == IS_SAME; }
       inline bool IsLower(){ return (bFlages & IS_LOWER) == IS_LOWER; }


       //Max values, quick finding
       G_QWORD maxAccumSteps;
       G_QWORD maxAccumWei;
       string maxAccumStr;
       inline void PushNext(GRAPH* gNode, G_QWORD& maxSteps, G_QWORD& maxWei, string& maxStr)
       {
              //gNode is the most recent GRAPH node, it may be linked before to other nodes
              next.push_back(gNode);
              //Update gNode's best accumulation if the path through this node is heavier
              if (maxAccumWei + gNode->v > gNode->maxAccumWei)
              {
                     gNode->maxAccumSteps = maxAccumSteps + 1;
                     gNode->maxAccumWei = maxAccumWei + gNode->v;//it may give me lower value than the past value inside gNode->maxAccumWei .. and that is correct decision because of gNode->maxAccumSteps is now greater
              }


              //Adjust maximum
              if (gNode->maxAccumWei > maxWei)
              {
                     maxSteps = gNode->maxAccumSteps;
                     maxWei = gNode->maxAccumWei;
                     maxStr = gNode->maxAccumStr;
              }
       }




}GRAPH;


//----------------------------------------------
//Sequence node to represent input data and needed functionality
typedef struct SEQUENCE
{
       G_QWORD len = 0; //len=N: it is input sequence length .. 1 <= N <= 150,000
       G_QWORD* seq = NULL;//seq=a: 1 <= a[i] <= 1,000,000,000, where i = [1..N]
       G_QWORD* wei = NULL; //wei=w: 1 <= w[i] <= 1,000,000,000, where i = [1..N]


       G_PTR* pGraphs;


       G_QWORD maxSubWeight = 0; //maximum is: 1,000,000,000 x 150,000 = 150,000,000,000,000
       G_QWORD maxSubSeqCount = 0;//maximum is 150,000
       string maxSubStr = "";//maximum length: 150,000 * 150,000 = 22,500,000,000 chars ~= 20.9GB


       G_QWORD weight()
       {
              AnalyzeBy_Posibility_Graph();
              return maxSubWeight;
       }


       void AnalyzeBy_Posibility_Graph()
       {
              //#define GENERATE_STR
              pGraphs = new G_PTR[len];
              memset(pGraphs, 0, sizeof(G_PTR)* len);//take care ... this function is defined in <string.h>


              GRAPH* root = NULL;


              if (len > 0)
              {
                     maxSubWeight = 0;
                     maxSubSeqCount = 0;


                     root = new GRAPH(0, seq[0], wei[0]);
                     pGraphs[0] = root;


                     for (G_QWORD i = 1; i < len; i++)
                     {
                           pGraphs[i] = new GRAPH(i, seq[i], wei[i]);
                           insert(root, (GRAPH*)pGraphs[i]);
                     }


                     //cout << maxSubWeight << endl;


              }


              remove(pGraphs, len);


              delete[] pGraphs;
       }


       void getMax(GRAPH* grp, G_QWORD subCount = 0, G_QWORD accumWei = 0, string accumStr = "")
       {
              if (grp == NULL)
              {
                     if (subCount > maxSubSeqCount)
                     {
                           maxSubSeqCount = subCount;
                           maxSubWeight = 0;
                     }


                     if (subCount == maxSubSeqCount && accumWei > maxSubWeight)
                     {
                           maxSubWeight = accumWei;
                     }


                     return;
              }




              if (grp->next.size() <= 0)
              {
                     getMax(NULL, subCount, grp->v + accumWei);
              }
              else
              {
                     for (long n = 0; n < grp->next.size(); n++)
                     {
                           getMax(grp->next[n], subCount + 1, grp->v + accumWei);
                     }
              }


              if (grp->listID == 0)
              {
                     GRAPH* temp = grp->lower;
                     while (temp)
                     {
                           getMax(temp, 1);
                           temp = temp->lower;
                     }


                     temp = grp->same;
                     while (temp)
                     {
                           getMax(temp, 1);
                           temp = temp->same;
                     }
              }


       }


       void remove(G_PTR* pLst, G_QWORD len)
       {
              for (G_QWORD g = 0; g < len; g++)
              {
                     delete ((GRAPH*)pLst[g]);
                     pLst[g] = NULL;
              }
       }


       void insert(GRAPH* root, GRAPH* gNode, GRAPH* fireWallNode = NULL)
       {
              //case1: empty nodes .. will not happen at all
              if (root == NULL || gNode == NULL)
                     return;


              //case2: same sequence
              if (root->b == gNode->b)
              {
                     if (root->same == NULL)
                     {
                           root->same = gNode;
                           gNode->SetSame(true);
                     }
                     else
                     {
                           if (root->same->listID < gNode->listID) //stop graph infinite same circulation
                                  insert(root->same, gNode);
                     }


                     insert(root->lower, gNode, root);
                     return;
              }


              //case3: the next sequence case
              if (root->b < gNode->b)
              {
                     //Me, my same and my lower: we are all on the same possibility-level
                     //e.g.: 1,2,3,4, 1,2,3,4, 1,2,3,4,5,6
                     G_QWORD id = 0;
                     GRAPH* next = root->next.size() <= id ? NULL : root->next[id];
                     //Make a lower firewall .. this case comes up if a lower-of-a-lower would be inserted into
                     //a firewall's next .. too complex, I know ..[e.g. 1 5 3 2 7 5 6 3 1 2 4 5 8 9] the last 4 and the first 7 have that problem
                     while (fireWallNode == NULL ? false : (next != NULL && fireWallNode->b <= next->b))
                     {
                           id++;
                           next = root->next.size() <= id ? NULL : root->next[id];
                     }
                     //decent deeper
                     if (next == NULL)
                     {
                           root->PushNext(gNode, maxSubSeqCount, maxSubWeight, maxSubStr);
                           gNode->SetNext(true);


                           if (root->same == NULL)//may lower exists
                           {
                                  if (root->lower != NULL)
                                         insert(root->lower, gNode, root);
                           }
                           else
                           {
                                  GRAPH* temp = root->same;
                                  while (temp)
                                  {
                                         temp->PushNext(gNode, maxSubSeqCount, maxSubWeight, maxSubStr);
                                         temp = temp->same;
                                  }
                           }


                           return;
                     }
                     else
                     {
                           //I have next .. give my next the control to decide for himself
                           insert(next, gNode);


                           //but wait .. it may be lower or same for my next: so...
                           if (next->b >= gNode->b)
                           {
                                  //It is lower and not a child of one of the other lowers of my next, so give it a next-link
                                  if (next->b > gNode->b && !gNode->IsNext())//walk to left .. but not deeper in that left
                                         root->PushNext(gNode, maxSubSeqCount, maxSubWeight, maxSubStr);


                                  //Take care of the longer thread and neglect the shorter
                            if (next->b == gNode->b && next->lower == NULL)//walk to right .. no branch in left
                                         //!!!!!!!! todays short .. tomorrows long !!!!!!!!!! e.g 1 5 3 2 7 5 6 3 1 2 4 5 8 9 >> x x x x x x x x 1 2 4 5 8 9
                                  {
                                         //My next has inserted his same under his lowers, I've already linked to my next's lowers
                                         root->PushNext(gNode, maxSubSeqCount, maxSubWeight, maxSubStr);
                                         GRAPH* temp = root->same;
                                         while (temp)
                                         {
                                                temp->PushNext(gNode, maxSubSeqCount, maxSubWeight, maxSubStr);
                                                temp = temp->same;
                                         }
                                  }
                           }
                           else
                           {
                            //If I have a same with no next, don't leave it unlinked from that greater next
                                  GRAPH* temp = root->same;
                                  while (temp)
                                  {
                                         if (temp->next.size() <= 0)
                                                temp->PushNext(gNode, maxSubSeqCount, maxSubWeight, maxSubStr);
                                         temp = temp->same;
                                  }
                           }
                           return;
                     }
              }


              //case4: the lower sequence case
              if (root->b > gNode->b)
              {
                     //e.g.: 1,2,3,5,1,4,2,3,6,4,2,7,5,6,7,8,9
                     if (root->lower == NULL)
                     {
                           if (!root->IsSame())//don't put any lower for any same, sames only in right and lowers only in left
                           {
                                  root->lower = gNode;
                                  gNode->SetLower(true);
                           }
                     }
                     else
                     {
                           if (root->lower->listID < gNode->listID)//stop graph infinite lower circulation
                                  insert(root->lower, gNode, root);
                     }


                     //previous of a lower must be preserved if it has a lesser sequence.
                     if (gNode->listID > 1 && gNode->b > ((GRAPH*)pGraphs[gNode->listID - 1])->b)
                     {
                           GRAPH& prv = *(GRAPH*)pGraphs[gNode->listID - 1];
                           prv.PushNext(gNode, maxSubSeqCount, maxSubWeight, maxSubStr);
                     }
              }


              return;


       }
}SEQUENCE;


void SubSequence()
{
       //How many test cases
       G_QWORD N = 0;
       cin >> N;
       if (N <= 0)
              return;




       for (G_QWORD x = 0; x < N; x++)
       {
              SEQUENCE seq;
              cin >> seq.len;


              if (seq.len <= 0)
                     continue;


              seq.seq = new G_QWORD[seq.len];
              for (G_QWORD s = 0; s < seq.len; s++)
                     cin >> seq.seq[s];


              seq.wei = new G_QWORD[seq.len];
              for (G_QWORD s = 0; s < seq.len; s++)
                     cin >> seq.wei[s];


              cout << seq.weight() << endl;


              delete[] seq.seq;
              delete[] seq.wei;
       }


}




int main() {
       /* Enter your code here. Read input from STDIN. Print output to STDOUT */
       SubSequence();
       return 0;
}