
How to succeed with your ‘gut-feeling’ estimates

Intuitively, many treat urgency and importance as the first questions about a problem or task to be solved. Surprisingly, complexity is often treated as a kind of “second-level certainty”.

The primary reason for this might be the need to classify the task in comparison to other things going on. But how can that be done fairly without including complexity from the start? A gut feeling for complexity can be achieved instantly. It is just a matter of mindset and trust in yourself.

“Do I need to brake, or can I pass the crossing before the light goes red?” Agreed, the urgency is absolute (though not necessarily, because one can intuitively just stop). In a situation like this, many will recognize that the complexity only becomes apparent after the decision. Let’s say the decision is made: I DRIVE!

Milliseconds after that, one recognizes the complexity:

  • “The pedestrian crossing, phew, it was empty (as it should be, lucky them..)”
  • “The surface is wet and snowy, but not slippery and icy. Phew..”
  • “No cars were in the crossing (there shouldn’t be, but one never knows..) Phew..”
  • “I didn’t miss any other rules here (a sign missed in the hurry), phew..”
  • “Oh, another pedestrian crossing.. that was empty too.. phew..”

The list can go on.. showing a complexity paralysis prior to the decision? What if everyone did this analysis the moment before the decision: a high-level gut-feeling approximation of complexity. Too many uncertainties? Discard it!

The complexity approximation model

Now I provide a model for classifying complexity. This is a personal vision, which I embody in some fictive stories told in a narrative way. This way of describing has its limitations, but it is a starting point for something that has worked for me since childhood. With this step done, I can refine the telling over time.

The model is quite autonomous to the brain. The trick is to relate conditions to the task while keeping focus on the whole. To describe this, I use a mental visualization built from geometrical figures. Increasing complexity is shown as what I call dimensions. The images and descriptions are there only to embody the idea of the relationships, not to be recreated as pictures in the head.

The more the brain gets used to a problem, the more efficiently it automatically manages complexity, depth and the number of possible outcomes. While this article might appear obvious to some, I choose to share my view anyway. It is actually very simple to practice.

Let me kick off with a summary of what’s coming:

Work it from left to right

From left to right, complexity increases. On top of that, the end of the article also elaborates on the superposition condition. Superposition is not about complexity, but about constraining the range or limits of the outcome. Now, let’s talk about dimensions. An example of a problem might be:

  • Deciding between an ad hoc and a known strategy.
  • How to recover from a mistaken and unexpected information leak during a meeting.
  • Putting an insufficiently tested component into production because that was better than delaying the release.
  • Taking the bus without planning the trip in advance.

You know best what to define as your problem!

Let’s go on with the latter: the bus problem. The obvious view means one condition: the time to reach the destination. Would you think about rush hour? Traffic jams, more passengers entering the bus, more stops at red lights, all in all conditions that impact the risk of reaching the destination in time. Obviously yes, if the trip is common to you. Taking in complex scenarios is autonomous; it is just about getting the brain used to it. But maybe your scenario didn’t cover the queue at the food truck, so you end up 10 minutes late anyway.

A more complex and much more urgent situation is when your system suddenly goes down and your judgement call is needed. How do you bring it back online? With less data loss, less time to recover, less damage to the traceability needed to identify the problem, and less load on supporting functions when users call in about the problem. Also taking into account the load on the system once it is back online again. Meanwhile, the MTTR (mean time to recovery) is ticking..
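
To make the ticking concrete, here is a minimal sketch of how MTTR is usually computed, assuming a simple list of outage durations (the sample numbers are invented):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class MttrExample
{
    static void Main()
    {
        // Invented outage durations for one service over a period.
        var outages = new List<TimeSpan>
        {
            TimeSpan.FromMinutes(42),
            TimeSpan.FromMinutes(7),
            TimeSpan.FromMinutes(95),
        };

        // MTTR = total recovery time / number of incidents.
        var mttr = TimeSpan.FromTicks((long)outages.Average(o => o.Ticks));
        Console.WriteLine($"MTTR: {mttr.TotalMinutes:F1} minutes over {outages.Count} incidents");
    }
}
```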

The difference between the bus problem and the system problem is the complexity of making a decision, and also how familiar you are with the conditions involved in the decision. Let’s detail the bus trip scenario from a dimensional perspective. Analogies are often accused of being strange or incomplete, but bear with me. The “space” in the models (length) declares the aspect or problem, for instance time (a bus trip measured in time).

Zero dimensional

In geometry, zero dimensions can be seen as a simple dot. This one won’t get anywhere. There are no arguments and no considerations. Just a single condition.

The condition that acts here is not in any way capable of evaluating itself within its own dimensional space, nor a higher one. It simply does not know how it relates to, or is located within, the one-dimensional space.

The lazy brain might sometimes just want to know where the destination is. Not how to get there, why, or when. I would call that a non-dimensional thought.

One dimensional

Going from the zeroth dimension to the first dimension brings most of day-to-day life onto the scene. Let’s demonstrate the metamorphosis by drawing a straight line.

A line like this symbolises one dimension very well, and how a zero-dimensional object is positioned within it.

Note that the line (the boundary of the dimension) is endless until you have drawn one. Also, the zero-dimensional object has an unlimited number of possible positions along the line. The only way to know where it is, is to actually draw the dot. Alternatively, add a rule that limits the possibilities, for instance “only positions evenly divisible by 5”.
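
As a small illustration of how such a rule collapses the possibilities, here is a minimal sketch; the bounds 0..100 are my assumption, only the “divisible by 5” rule comes from the text:

```csharp
using System;
using System.Linq;

class OneDimensionalRule
{
    static void Main()
    {
        // A drawn line covering positions 0..100 (bounds are assumed);
        // the rule "only evenly divisible by 5" comes from the text.
        var allowed = Enumerable.Range(0, 101).Where(p => p % 5 == 0);

        // The rule turns an unbounded set of positions into a short list.
        Console.WriteLine(string.Join(", ", allowed));
    }
}
```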

Observed another way, this dimension is aware of the presence of a zero-dimensional object. It is able to measure, analyse and understand it. If the first dimension wants to, it can tell the zero-dimensional object about its condition.

This is quite familiar from how we frame options in IT architecture to limit something unknown. Or from software development, where we have a master class that owns sub classes. At least in those constructed according to SOLID.
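
A minimal sketch of that owner/sub-class relation, with invented names (Route, Stop): the owner can measure and position its sub-objects, while a Stop on its own knows nothing about the space it sits in.

```csharp
using System;
using System.Collections.Generic;

// A Stop knows nothing about where it sits; only the owning Route
// (the "higher dimension") can measure and position it.
class Stop
{
    public string Name { get; }
    public Stop(string name) => Name = name;
}

class Route
{
    private readonly List<Stop> stops = new List<Stop>();

    public void Append(Stop stop) => stops.Add(stop);
    public int PositionOf(Stop stop) => stops.IndexOf(stop);
}

class Demo
{
    static void Main()
    {
        var route = new Route();
        var central = new Stop("Central");
        route.Append(central);
        Console.WriteLine($"{central.Name} is at position {route.PositionOf(central)}");
    }
}
```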

Two dimensional

As you might already guess, the second dimension adds an axis and connects the lines. You get what we often refer to as a symbol.

At this stage, we abstract the zeroth dimension completely into the first dimension. We now have a similar issue: the one-dimensional object has an endless number of possible ways to connect between the lines in the two-dimensional object.

There is no way to know where the start and end are positioned before you actually draw them, or restrict with conditions where they are permitted.

Three dimensional

Up to and including this dimension, we are within the human brain’s capability to perceive a model. However, when adding a third dimension to a condition, things can become madly complex for a human brain to perceive. No surprise that AI is in such hype currently; the idea is popular for its possibility of identifying questions and answers that easily cross borders the human brain so far never could.

But three dimensions is far from AI. It is just a condition that changes by a third factor. While your architecture meets a certain quality attribute with a grade of 1 to 5 (one dimension), it might decrease by causality against another quality attribute (two dimensions). While many analyses stop here, there is normally yet another factor: for instance time, or CPU utilization from zero to 100. Or why not approaching or receding from a hurdle rate (a business term).

As is said about the speed of light in vacuum: the closer matter gets to it, the infinitely heavier that matter becomes, which is what keeps it from moving faster than light.

Four dimensional

Aware that I am hereby stretching the limits of our vision, I won’t try to draw a spooky 4D picture. Instead, I try to visualise the practical result of how I tend to think at this stage. A statistician or data scientist would feel right at home!

What happens for me here is that a given three-dimensional scenario has, at any chosen position among the possible outcomes, unlimited possibilities in turn (seen in blue above). See the thick green dot above, which marks a chosen position.

The blue box can be one-, two- or three-dimensional, though a three-dimensional one is described above. So, if the picture were true, it would need to contain as many blue cubes as the red dotted line can change within what the set of rules allows. For instance, from 0 seconds response time to 180 seconds response time (i.e. a constraint or an accepted interval stated per requirement). The five “corners” in each line above might correspond to five statements that can change independently, but each one within the given rules (i.e. response time is one sample of a rule that might be a constraint).

Much easier to picture mentally, right?

You might see here how the mental visualization of such scenarios can be very effective for making quick decisions. If you know the requirements or constraints of the system, you can easily consider forward or reverse scenarios to rule things in, or out.

An example of this might be a regular reverse troubleshooting scenario, where you know that the probability of a high response time, caused by congestion at API x (or perhaps a network link), results in inconsistent behavior in the GUI. That derives further up the parameters or dimensions: a number of users now interact with the GUI in a different way.

Maybe your experience says that once this behavior occurs, it most likely causes follow-on issues. You might very quickly make sure that a connecting component (or the component itself) increases its logging, and order a reset or cleanup of it.

Not so strange after all, right? Response time is a level of probability, and its level makes a different impact on the congestion. The congestion itself might appear for different reasons and have different impacts. You might know that completely saturated congestion is more likely network than API. The customer behavior change is just a matter of finding alternative sources to ensure or strengthen the validity of your decision, if needed.
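
A minimal sketch of that rule of thumb as code; the thresholds and messages are invented assumptions, not measurements, but they show how quickly such gut-feeling rules can be made explicit:

```csharp
using System;

// A minimal sketch of the reverse-troubleshooting reasoning above.
// All thresholds and category names are illustrative assumptions.
class CongestionTriage
{
    static string LikelyCause(double responseSeconds, double requirementSeconds)
    {
        if (responseSeconds <= requirementSeconds)
            return "within requirement - no action";

        // Saturated ("completely stocked up") points at the network;
        // merely degraded points at the API, per the rule of thumb above.
        return responseSeconds >= 10 * requirementSeconds
            ? "likely network congestion - check links first"
            : "likely API congestion - increase logging, consider reset";
    }

    static void Main()
    {
        Console.WriteLine(LikelyCause(2.5, 3.0));
        Console.WriteLine(LikelyCause(12.0, 3.0));
        Console.WriteLine(LikelyCause(45.0, 3.0));
    }
}
```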

Stateless (or superposition)

Quantum mechanics talks about superposition, which is closer as a comparison than calling it a 4th dimension. Older science usually refers to spacetime as the 4th dimension. While that is not quite the same thing,
(more to come).

Wrapping up at the end of an assignment

Again, the time has come for me to wrap up a long-term assignment. This time there are over two years to be gathered up. At my age and with my background, I can honestly say that this situation is by now quite familiar.

Do you recognize yourself in situations where you are about to wrap up, and either have nobody to reach out to or have everyone hunting you for as many reasons? Let me describe four persona models and a way forward, based on my experience.

Wrapping up the end of an assignment might be more critical than any other activity during the assignment, even more than meeting the expected deliverable criteria. Because, simply put, if the deliverable was not met, it is unlikely that new trust will be built upon your delivery. On top of that, this is the phase where your activities might make the heaviest footprint: a confidence test of the relationship between the customer and the company you represent, but also of you as a person.

While every exit is unique in a number of ways, there is a pattern in which some activities recur from time to time. I will try to share some experiences from this, describing them as models and anti-patterns to try out.

Strictly speaking, a lack of quality in this phase can cost people service and even health. Note to self: this primarily concerns assignments and roles where one has been a key resource in a particular way, for instance a broad (or specialized) resource over time. Projects driven with well-defined deliverables, defined roles, an organized budget and release dates often have this handover and transition covered within standard risk measurement.

I write often, because it would be a lie to say that it is always like that.

Anti pattern: Spend all effort on one card

Once you gather the cards to end the assignment, loneliness might set in. All of a sudden, a manager approaches you with a “You know what you need to do” face, while all the rest scream for every kind of attention to get something filed. You are left to spend whole days or weeks gathering data, holding meetings and answering questions on loose threads. The kinds of questions that suddenly appear can be quite interesting.

Roughly, I model the ending as a share divided between four personas: “the documenter”, “the worrier”, “the careless” and finally “the vetger”.

While all of them have every right to mitigate their own risks, one of them might appear more convincing or even dominating to you. Perhaps one is your manager, or someone you look up to, i.e. an established friendship. Ending up focusing too much on just one of them can steer you into this anti-pattern.

Note: “careless” has a negative tone, but the point is the opposite. There are a number of fellows who just reply “no problem Jonas, we have all we need”. While this is a good sign, it is sometimes too lightly waved away. To feel sure, I used to handle this situation by, for instance, asking about one activity that should be covered by that statement.

Anti pattern: Underestimate culture impact

There are two views of this. One is how the role in the organisation impacts the communication network. Besides that, the change or impact on team culture or spirit once you leave.

For the former: relationships to other stakeholders. Often the communication network lacks persistence when key roles disappear. See this sample model.

As seen above, some resources are obviously handed over, but others are lost. A number of formal, informal or less related connections and relationships are usually forgotten or underestimated. Another challenge in the handover is how manageable it is for the new resource to establish and keep the connections. Often personal chemistry creates informal relationships, which might not work the same for your successor.

Other times the connections are there, but prioritization or a long-standing lack of reasons to keep in touch has made them cold and maybe even non-existent. Say a developer had a very tough time investigating network latency, and you therefore established good relations with network operations architects or managers: a useful contact to add to the stakeholder list, but maybe not critical to the application once the issue is sorted out.

On the latter view: your own impact on the spirit and culture. It might sound unrelated to work, but it is directly related to the footprint you leave. If the culture is such that some person(s) got used to your presence, chances are that the mood or emotions of at least one of them are moved. Personally, I find it a tough thing to identify; I am often surprised by who really cared, and who did not at all.

Anti pattern: Try to model the phase out

This anti-pattern might sound like I am contradicting myself. However, it is a matter of what you try to model. It can be a tempting idea to model the “how to decommission myself” process. Similar threats and worries can be identified across assignments, but the way they play out differs from time to time. An acceptable level here would be very high level, just to demonstrate how you are going to take care of it. More detailed, with dependencies added, is clearly an anti-pattern: an increased risk of focusing too much on a single point and losing other aspects.

I can find myself making up a list of areas. Then I try to lay them out in a matrix: a number of roles (or actors) against a number of concerns. Then I fill in the way each role hits the given concern. After that, each cell in the matrix can be measured and followed up (in case all of this is needed to ensure the handover). A small sketch of such a matrix follows after the questions below.

  • Does it need to be documented? How? Who is the receiver? Should it be 80/20 perfection, or just some guiding headings?
  • If “high level”, is it really helpful? Maybe this “high level” does not add anything at all, so I should be less lazy and provide some more depth?
  • Does it need repeating? Should the information be duplicated and split in two: a more in-depth version for those most impacted, and a broader high-level version as supporting info for another related person?
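
A minimal sketch of such a roles-by-concerns matrix; every role, concern and cell value below is an invented example:

```csharp
using System;
using System.Collections.Generic;

// A minimal sketch of the roles-by-concerns handover matrix described
// above. Role and concern names are illustrative assumptions.
class HandoverMatrix
{
    static void Main()
    {
        // Each cell records how the role is hit by the concern, so it
        // can be measured and followed up later.
        var matrix = new Dictionary<(string Role, string Concern), string>
        {
            [("Developer", "Documentation")] = "needs code-level handover notes",
            [("Operations", "Access rights")] = "must inherit admin accounts",
            [("Product owner", "Stakeholder contacts")] = "receives the reference list",
        };

        foreach (var cell in matrix)
            Console.WriteLine($"{cell.Key.Role} / {cell.Key.Concern}: {cell.Value}");
    }
}
```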

The “reference list”

Assuming there exists any handover material at all, one highly valuable piece is a high-level document covering areas, processes and stakeholders. And, briefly, areas that have been initiated or engaged during the time. If enough time can be spent, it is also valuable to document upcoming changes and challenges with some “gut feeling”. Having SWOT in mind might add some value.

Even if it is not directly related to the primary purpose of the project or system scope. As is often the case, this can ultimately be as simple as an Excel sheet with a few simple tables stating a contact and a reason.
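
A minimal sketch of such a “contact and a reason” table as a plain data structure; the columns and sample rows are invented examples:

```csharp
using System;
using System.Collections.Generic;

// A minimal sketch of the "contact and a reason" stakeholder table.
// Fields and sample rows are illustrative assumptions.
record Stakeholder(string Contact, string Area, string Reason);

class ReferenceList
{
    static void Main()
    {
        var list = new List<Stakeholder>
        {
            new("Anna B. (network ops)", "Infrastructure", "Helped trace latency; knows the WAN links"),
            new("Service desk team lead", "Support", "First contact when users report outages"),
        };

        foreach (var s in list)
            Console.WriteLine($"{s.Contact} | {s.Area} | {s.Reason}");
    }
}
```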

Stakeholder lists are naturally short-lived, but they do provide a valuable view of the bloodstream of your work. A piece of organizational evolution can be tracked once the gap between now and the time the list was produced is identified.

Wrap Up

In the end, how well you wrap everything up might come down to your professional judgement in how you leverage and motivate the effort spent on each of those three areas.

Some basics that drive IT today were already known by Einstein

The idea for this post came on a completely normal Tuesday evening while I was reading an old book I found in my bookcase. The book is a small summary of Einstein’s general and special theories of relativity, approximately 150 pages.

This is for sure a book that takes some time to read. Not because it is complicated; on the contrary, it is quite simple. But it surprised me how many of the mechanisms of the universe he describes are actually building blocks of humanity’s unique ability to perceive, learn and do better. Problems that could take another mammal dozens of generations to learn, by selective inheritance of the genome (because weakness is naturally bred out), can be learned overnight by a human.

Okay, thanks for the anthropology. What’s the IT deal in this?

I want you to read this paragraph from the book. My copy is in Swedish, so I will give it word for word in Swedish, and then amateur-translate it myself into English.

This is just one of many examples in the book; this particular one is easier to relate to everyday events.

At this very moment, I want to emphasize Einstein’s focus on how perception of the events lies in how different actors perceive the same object, and what differences they recognize. On top of this, the different mechanisms and theories he needs to invoke and describe to prove what each actor recognizes.

So it became clear to me in parallel with this reading, and it is the reason for this post: this is exactly what defining and documenting an IT architecture is about.

Let’s list some similarities. IT architecture:

  • is a moving object in its space
  • has different actors
  • has properties whose impact differs between actors (and with changes in its space)

Conclusions from the perspectives that Einstein is keen on

  • You take a viewpoint, for instance Kruchten’s 4+1, and define useful perspectives for the audience.
  • On the perspectives, you define views. The example from Einstein defines two perspectives: one is the pedestrian; the other is yourself, looking out from the train carriage.
  • The views are what require Einstein to become scientific in his answer. This is also where our competence makes the most sense.
    • To describe what happens.
    • Why it happens.
    • What objects are related.
    • Why they are related.
    • Are there other processes or views adjacent to this?
    • Or not mentioned here, but with an impact?
    • ..and so on.

I would not use this post to convince you that Einstein discovered the methodology of viewpoints. It is just a populist way for me to convey the importance and impact that viewpoints have. IT architecture could actually be seen as an organism, hosted as technology but driven by humans. Some mechanisms are simply related to how humanity inhabits the earth, and the earth is driven by the laws of the universe.

I also want to point out how Einstein masters the super-clear viewpoint / perspective / views methodology throughout the book. It has helped change the view and understanding of the world’s building blocks for hundreds of millions of people around the world.

Can viewpoints, together with such clear views, change the understanding of hundreds of thousands of IT systems around the world? Of course, yes! And yes again. Some are already doing so; for the rest: let’s study! Once you master the methodology and have the experience to define relevant viewpoints, it will be much easier to concentrate on providing the best scientific (or exact) facts to the views.

Thanks.

And some links:

Get help from IASA Global’s evolution of Kruchten:

SSA – Views, Viewpoints and Perspectives

Context: Describes the relationships, dependencies, and interactions between the system and its environment (the people, systems, and external entities with which it interacts). Many architecture descriptions focus on views that model the system’s internal structures, data elements, interactions, and operation.

Einstein’s General and Special theory:

Relativity: The Special and the General Theory – Wikipedia

It was first published in German in 1916 and later translated into English in 1920. It is divided into 3 parts, the first dealing with special relativity, the second dealing with general relativity and the third dealing with considerations on the universe as a whole.

 

We are in mourning for MS Paint – but remember the destiny of File Manager 20 years ago?

Loud barks echo around the Internet regarding the discontinuation of MS Paint in Windows.

Does somebody here remember the voices when Microsoft did the same with “File Manager” (winfile.exe) in the last versions of Windows 3.1 and the Windows NT 3.5 server era? The loss of WinFile was a productivity catastrophe until Total Commander saved the world.

But that is also part of history, and today only dinosaurs know about it. Now let’s mourn MS Paint and make some guesses about what will be discontinued in about 20 years, in the year 2037. What is your best guess?

Read more at Windows.com >>

MS Paint is here to stay

MS Paint fans rejoice: The original art app isn’t going anywhere – except to the Windows Store for free!

On top of this, Microsoft commented that Paint 3D will be next. Despite the headlines, classic Paint moves out of Windows and needs to be fetched from the Windows Store.

A little better destiny than File Manager (which faced a large number of technical limitations in relation to modern operating systems).

https://www.theregister.co.uk/2017/07/25/microsoft_paint_on_windows_store/

Remember the invaluable software design patterns? It’s debt time

I want to revisit a time of mine, and perhaps yours, when I was obsessed with the Gang of Four, GoF: the super (?) popular collection of design patterns for OOP programmers to follow when developing solutions and applications.

  • The developer role has evolved
  • Separation of programmers and strategists
  • Using patterns to communicate over principles
  • Identifying value by investigating dependencies

GoF is a collection of programming design patterns that can be used to solve many common problems in object-oriented software development. They brought so much value to OOP-style development.

In addition to the man-hours I spent as a developer reading and learning the patterns, I also spent countless hours developing and implementing patterns over the years. Like a bible, both in professional and spare-time projects. I never seriously challenged the importance of the patterns; I just followed them slavishly.

A sign of not being senior enough. Or perhaps, as I would say today, of not being questioned enough. As the programmer who knew his patterns, I was not questioned about what I said or did. Instead, my agitators were at Stack Exchange. I challenged my implementations and worked closely with Stack Overflow. Of course I was boiled with razor blades. But I got skilled and learned my lessons.

But the patterns were still the bible, even on Stack Overflow.

In time, I learned to look back at what I really did and how I did it. I increased my holistic view and whole-picture thinking. What did those patterns really mean? And I am still not sure why this shift happened to me. Has the programmer role evolved lately, so that it is more expected to demonstrate the value of the code to strategists and the business? Sometimes the roles are (and, depending on the organization and assignment, should be) mixed, as in strategist and developer being the same resource (for instance a highly skilled expert resource). While this separate topic might be interesting, let’s reconnect focus to the patterns.

OOP design patterns back in the day

Back then, patterns were quite easy to demonstrate, because a problem was solved with one or a few tools and frameworks, for instance JDK/JRE or .NET C#. Collections like GoF cover most scenarios, so not following a pattern was strange. But the important missing question about the patterns was how a pattern was implemented. There were not many questionnaires or analysis methods to confirm that the development was valuable. Back then, I did not need to provide proof that the implementation would be measured in value. For sure, I would be questioned in terms of SOLID or even OOP, and boiled or blamed for every mistake. But I would certainly not be questioned about how I could ensure the business did not lose money when the CTO requested a new integration.

That was good for me, bad for the business. Do you already know why that was bad for everyone, including the customers? Because it would surely create a business (or sales) vs IT (technology) culture.

For a strategist, the segregation and understanding of the dependencies between the values that objects hold or enable is far more important. Not to mention the value that might be lost in one part of the system if another part gets into trouble. I will demonstrate below. Say that I, back then (a programmer following design patterns), wanted to describe for the CTO a pattern I had built and how good it was, so I created models, because it is obviously hard to traverse code, classes, namespaces and technologies.

With this model, it all looks clean and good. But the 10-point question is: how does it stand if it is described in terms of value to the business? Sometime later, the CTO or the business may ask to add new logic to this nicely described adapter pattern, or may ask how much money we lose if XmlAdapter unexpectedly stops working.
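
For reference, a minimal sketch of the kind of adapter arrangement the model describes. XmlAdapter and TheDataHub are named in the post; the interface and the legacy reader are my assumptions.

```csharp
using System;
using System.Collections.Generic;

// A minimal sketch of the adapter arrangement the model describes.
interface IDeviceAdapter
{
    string ReadDeviceData();
}

// Legacy XML-specific component with its own incompatible interface.
class LegacyXmlReader
{
    public string FetchXmlDocument() => "<device id=\"42\">...</device>";
}

// The adapter translates the legacy interface into the one TheDataHub expects.
class XmlAdapter : IDeviceAdapter
{
    private readonly LegacyXmlReader reader = new LegacyXmlReader();
    public string ReadDeviceData() => reader.FetchXmlDocument();
}

class TheDataHub
{
    private readonly List<IDeviceAdapter> adapters = new List<IDeviceAdapter>();
    public void Register(IDeviceAdapter adapter) => adapters.Add(adapter);

    public void CollectAll()
    {
        foreach (var adapter in adapters)
            Console.WriteLine(adapter.ReadDeviceData());
    }
}

class Demo
{
    static void Main()
    {
        var hub = new TheDataHub();
        hub.Register(new XmlAdapter());
        hub.CollectAll();
    }
}
```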

Impact in terms of (any measurable) value

By looking at the model, it is easy to trace the impact of disconnecting a component. The XmlAdapter seems simple to just cut off. But if we assume the model is the whole truth, that assumption will probably cause big problems. One might ask what is meant by “value”. The context probably changes with trends and time. The value might be monetary, the number of deployments required, or the components impacted.

In the current generation of technology development, it is a losing concept to just throw a design pattern over the table and then implement it. A change needs traceability, explanation from relevant views, and intent. Documenting, structuring, adding traceability, communicating with stakeholders, confirming and getting sign-off might take more time than the actual development. But that is the point. The result will be systems that are understood by the audience, stable operational conditions (or known reasons not to be stable), and development, change and release that follow a process.

How can a CTO plan for emerging trends or match the rapid changes of the business if the CTO does not know the technology’s significance between planning and deployment? Shouldn’t the CTO know whether a change requires refactoring half the codebase or just a minor part?

How do we ask the right questions for this? And who cares about the value? Simple answer: make sure there is a strategist role in the project, company or department. A quite simple way to challenge the valuation in the earlier example, starting in a company from zero, could be something like this from the CTO:

“We want to introduce a manufacturer that produces nano sensors. It will surely require a new adapter, but it should behave exactly the same towards TheDataHub. The difference I can think of is to make sure the hardware identification has space for 256 characters. See the sample model.”

We can also see the CTO’s level of understanding, which is really important, and that the CTO thinks he or she has an idea of what needs to be done. According to the pattern we followed in the earlier model, a dummy adaptee should be implementable in a kick. Right? The tricky part should be attaching the functionality for connecting nano sensors, preferably done in a separate space attached to the adapter.
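
Continuing the earlier sketch under the same assumptions, the new adaptee might look like this; only the 256-character hardware identification comes from the CTO’s quote, the driver is an invented placeholder.

```csharp
using System;

// Interface repeated from the earlier sketch so this sample compiles
// on its own.
interface IDeviceAdapter
{
    string ReadDeviceData();
}

// Invented placeholder for the manufacturer's nano sensor driver.
class NanoSensorDriver
{
    public string HardwareId => new string('A', 256); // up to 256 characters
    public double ReadValue() => 0.42;
}

class NanoSensorAdapter : IDeviceAdapter
{
    private readonly NanoSensorDriver driver = new NanoSensorDriver();

    public string ReadDeviceData()
    {
        // The 256-character requirement from the CTO's quote.
        if (driver.HardwareId.Length > 256)
            throw new InvalidOperationException("Hardware id exceeds 256 characters");

        // Behaves exactly the same towards TheDataHub as the XmlAdapter.
        return $"<device id=\"{driver.HardwareId}\">{driver.ReadValue()}</device>";
    }
}

class Demo
{
    static void Main() => Console.WriteLine(new NanoSensorAdapter().ReadDeviceData());
}
```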

“Cool, just relax. I’ll code it and get back to you when it’s done!”

The true developer (looking at myself back in time) would stick with this comment and start developing. I tell my CTO that he can relax while I realize the model with the new adaptee for nano sensors. So now the moment of truth has come. Will it be that easy? Regarding flexibility towards the business’s technology changes, do you think the CTO floats in the land of unawareness? On his hands there are most likely external expectations from a manufacturer, which might also have development of its own to do to meet the CTO’s expectations.

Let us jump back to the developer’s view. Today we must be able to question models and code implementations in more ways. It is simply not enough to be provided a single simple view. Once we are provided several views (relevant to the situation), the strategist can ensure that the change or new capability gets the correct attention and/or resources.

Assume that the following model is closer to the actual implementation of the famous pattern, which is a completely possible scenario:

An experienced developer can quickly see that the CTO or an architect would have problems if a new emerging technology were to be implemented here.

The strategist would need to ask for some views that are not completely code related. For instance:

  • How does authentication touch the components?
  • What infrastructure objects exist, and what are their relations (for instance, a database on a separate server)?
  • How do back office/admin connect to the components?
  • How many kinds of readable objects are there?
  • What is each object’s current frequency?
  • How do the frequency and the number of objects relate to, and between, the adapters?

… and so on. The pattern here is questions that may have significance for the design.


Value in adding a strategist

In this fictive example, it is clear that we have a responsibility to support the way the application is meant to provide service to the users. What is not clear is that we should leave all that responsibility in the hands of a developer, with nothing more between the CTO and the developer. It does not say “we don’t trust developers”; it is more like saying “there is a nurse between the client and the doctor”, and with very good reason. The views and concerns are different, and so are the tasks. It is simply not fair to place all responsibility in the hands of the developer (or to require the CTO to have very developer-focused skills).

If we take some of those example questions to the model above, not much is answered. Right? We just see that the implementation really is an adapter pattern. Points earned there. But it is more an adapter style than a pattern. This pattern implementation will surely cost a lot to separate into modules.

We can quickly see some required improvements, but that is not the point. We already see that the lack of clear abstraction between the data layer and the adapters gives a very bad smell for stability and deployment, just to increase the length of the object-name identifier string column. Also, the authentication is really in the shadows.

Having a strategist, for instance an architect, an architect-like CTO or an analyst who knows how to ask the right questions, to provide a bridge between strategic needs and technical requirements, can really save or increase the room for improvement you have in the roadmap.

The days when the heavyweight senior developer can do everything must be consigned to history. That is not, as already emphasized, to question the skills or understanding of the role; it is more that the concerns are different. Skilled heavyweight developer resources can deliver both strategy and code, but be aware that they provide value under different principles, and make a point of being clear about it. If not, this resource (you?) might consider whether they are really a developer or really a strategist, and continue with one of them as the home zone.

/Jonas

Jonas Nordin | Professional Profile | LinkedIn


Read a recent page that lists the GoF patterns here:

The 23 Gang of Four Design Patterns .. Revisited

The Gang of Four (GoF) (from Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley Professional Computing Series, by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides). These 23 GoF patterns are generally considered the foundation for all other patterns.

Analysing capacity at ±15 deviation compared to ±0 deviation

Would this model make sense as a visualization of how analytical depth and skill could differ between individuals, beyond education and experience?

When would the #2 archetype fit in professional work where #1 does not? And the opposite? Could (or even should) everyone try to become capable of fitting #2?


Each individual has a natural degree of talent for versatile and parallel approaches to a problem (and for solving it, which is an ability in itself). Communities often talk about skills, but rarely about how intelligence, the inborn ability measured as IQ, impacts how well a skill is improved, or not.

Limiting the range to ±15 IQ points from the mean (one standard deviation) includes approximately 68% of the population: people you meet daily and probably at work (which may be more or less diverse, depending on your site).
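
A quick numerical check of that share, integrating a normal IQ distribution (mean 100, SD 15) over ±15 points; the expected result is roughly 68%:

```csharp
using System;

// Computes the share of a normal IQ distribution (mean 100, SD 15)
// that falls within +-15 points, by numerical integration of the
// normal density over [85, 115].
class IqShare
{
    static double NormalPdf(double x, double mean, double sd)
    {
        double z = (x - mean) / sd;
        return Math.Exp(-0.5 * z * z) / (sd * Math.Sqrt(2 * Math.PI));
    }

    static void Main()
    {
        double mean = 100, sd = 15, lo = 85, hi = 115;

        // Trapezoidal integration.
        int steps = 10000;
        double h = (hi - lo) / steps, sum = 0;
        for (int i = 0; i <= steps; i++)
        {
            double weight = (i == 0 || i == steps) ? 0.5 : 1.0;
            sum += weight * NormalPdf(lo + i * h, mean, sd);
        }

        Console.WriteLine($"Share within +-15 IQ points: {sum * h:P1}"); // ~68.3%
    }
}
```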

How someone at the mid-right of this range, compared to the mid-left, approaches a problem, daily tasks and decision-gathering is very different, but may look the same from the outside or in a questionnaire survey.

/Jonas

Jonas Nordin | Professional Profile | LinkedIn


Snapshot from a mentorship on optimizing how business and IT approach each other

It is a really, really great opportunity to mentor and help someone think something through, either with direct advice or as a sounding board providing wider understanding, another view or new insights. I also find the self-recognition effect very amusing, as well as the challenge of transforming what I apply myself into writing, or redistributing it into an article like this. Okay, enough gaiety and back to the topic! This time the mentoring was about this (classic) topic:

Business is bad at describing to IT what they want. How do we make them understand what we need?

As you can see, I was immediately pushed into a classic round: Business vs IT

Anyone with a sense of humor understands the point of the model. However, there is usually a gap between IT and business. Intuitively and by the book, one could think this is a junior question and that what follows is just a skill curve. But the fact is there, described in various job descriptions, in the pillars of an IT architect as well as of a program manager or a project lead. Even whole programs are defined and driven to reduce this symptom. There is a lot of value in agile IT waiting for this gap to shrink. Every discussion like this is a little step closer to providing that value to users, employees and organisations.

The best part of it all: it is quite simple to address and mitigate, just with mindset and patience.

This time I was given a chance to emphasize engagement as the way forward, as opposed to contractualization or spending time claiming, restricting and explaining definitions. Of course a documented baseline can (and should) co-exist, such as “how business should define a requirement”. But as soon as that tool is used just to “throw over the desk”, there will be a “how IT should meet our requirements”, and then we can start round 2 and everyone fails.

Instead, we consider engagement. The key is to build dynamic and soft touchpoints reflecting the situation and scope: whether the method of requirement delivery from the requester (business) to the receiving IT (developer), or the content to be discussed, documented as a test case or similar, based on the change management culture in the department(s).

The shared forum should be technology agnostic and driven by the need to complete both sides’ perspectives of the requirement.

I like to ensure a generic approach wherever possible, to enable reuse of methods and thinking: not being bound to a product or vendor, and not requiring the other side to interact with specific tools beyond general office tools. The “engagement” itself may be an email chain, a phone conference, a meeting across the desk, a shared screen viewing a Kanban board, or all of them. The point is to have a together-approach when deciding and agreeing on how recognition meets requirements.

The deliverable from the engagement into development will be very product specific. But the requirements or ideas need to be iterated through a templated assessment, a set of questions like the model below. It is easier to ask “what should be defined here, in the acceptance criteria?” from a defined template.

If an item does not apply, just note that it does not apply and pass to the next (but do note it, don’t just remove the question). If an item cannot reach a conclusion, just document that and return to it next time (a classic backlog).
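
A minimal sketch of such a templated assessment as a data structure; the statuses mirror the rules above (never delete a question, mark it not applicable or park it in the backlog), while the questions and notes are invented:

```csharp
using System;
using System.Collections.Generic;

// Statuses mirror the rules in the text: items are answered, explicitly
// marked "not applicable", or parked in the backlog - never deleted.
enum ItemStatus { Open, Answered, NotApplicable, Backlog }

record AssessmentItem(string Question, ItemStatus Status, string Note);

class RequirementAssessment
{
    static void Main()
    {
        var template = new List<AssessmentItem>
        {
            new("What are the acceptance criteria?", ItemStatus.Answered, "Defined with business"),
            new("Is data migration needed?", ItemStatus.NotApplicable, "No persisted data in scope"),
            new("Peak load expectations?", ItemStatus.Backlog, "Awaiting figures; revisit next session"),
        };

        foreach (var item in template)
            Console.WriteLine($"{item.Status,-14} {item.Question} ({item.Note})");
    }
}
```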

This much was approximately what we were able to conclude during our 30-minute session.

Finally transformed into the point of increased value

Besides this, it is open to take the next step by connecting continuous integration and continuous deployment to the requirements, with automated reports for test acceptance approval by business. Let that be a later story. Or your story.. =)

/Jonas
@ImmerseIt

Is this how Russians use IT to promote Trump to non-US citizens?

Russian hackers appear to think outside the box when providing and distributing messages to the Internet-wide public.

Webpage statistics

It is quite worrying to read about modern-day attempts to fabricate misleading information and news. It is also worrying to see organizations or hackers trying to use weak spots around the TCP and UDP protocols to push messages to the world, not only tricking the resources that consume the traffic but also abusing its design. The screenshot above is taken from two different web pages, and we can see completely abnormal activity here. In this particular case, there is a language header provided in the HTTP traffic (carried over TCP) that client and server use to determine, for instance, language capabilities and preferences. Short version. As a side note to the article, I have no facts proving Russia as the source, only that they appear to be. An interesting situation when talking about false information. So let’s continue from that standpoint.

This header is entirely possible to amend and play with using just normal development knowledge. It is not even considered “hacking” to change those fields to whatever you like. What we see here is a new kind of grip in an information war. Hey, these headers exist to be used for customization, to make web pages and clients adaptable to the style the clients want, and for servers to load and balance the correct resources.
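
To show how low the bar is, here is a minimal sketch of a client setting that header to arbitrary text; I am assuming the post refers to the standard HTTP Accept-Language request header, and the URL is a placeholder.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class HeaderDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Any string can be placed here - no "hacking" required.
        client.DefaultRequestHeaders.TryAddWithoutValidation(
            "Accept-Language", "here-is-an-arbitrary-message-not-a-language");

        // Placeholder URL; the server (and anyone logging headers) sees the text.
        var response = await client.GetAsync("https://example.com/");
        Console.WriteLine((int)response.StatusCode);
    }
}
```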

But unfortunately, it can also be used as an alternative way to spread propaganda or other kinds of information. It has long been a fairly straightforward channel (used mostly by hackers) to spread information “under the radar”. Over the years, several weaknesses have been exploited in servers and clients by malforming those headers and how they look, and fixed again with hundreds of patches in all kinds of layers and applications. The issues exist and are fully possible because computer systems are traditionally built on the assumption that “keeping good sense is a win-win”, so owners and developers develop only up to application stability. Security has often been the black sheep, associated with unnecessarily high cost because of: “Do we need it for it to work? Does it work anyway?” No; yes..

In recent years, we have come to see that the cost reduction did not disappear. It was just moved forward in time and classified as a “security threat” instead of being included in the development sprints from the beginning. If there is any good in it, it is that we now have new IT professional titles such as “security architect”, “security specialist” and so on. They now have a job for a lifetime.

To be honest, more annoying than worrying is that the world’s most used communication method depends, at the transport level, on just two transfer protocols (I will put a BUT on this comment in a later posting), both of them relying so heavily on the sending and receiving applications for their security. I want to mention the link and hardware layers, but that chapter would dig us into a black pit of mud, open for exploits and for spreading disinformation.

What do you think we should do in the near future? SSL and two-way encryption are just ways to hide information from the wires and waves. If that becomes too efficient, we will start to worry about other war-related challenges. But the core of the issue is also closest to a solution: the information that can be sent is constructed at the application level, developed by programmers. It is received by applications developed by programmers. Programmers can be hired, or have their own agendas or other purposes that do not follow the purpose of their work. Employees may knowingly leak information for foreign purposes. Software security is hardly of help here. Applications can be patched: rely less on the data provided in the headers, and have better mechanisms for how information is transferred, to reduce the risk of being hacked. In these days of machine learning, deep learning and AI algorithms, we also need to take much more care about how much descriptive metadata applications provide. Also: how much descriptive metadata key positions within infrastructure, and at the application owner and administrator level, can leak, accidentally or deliberately.

I see a year 2017 where most developers will face the questions:

  • How do you secure your code?
  • What does security mean to you?
  • What does the word “responsibility” mean to you when you produce safe code?
    • not in terms of memory leaks or machine safety: this means information safety
  • Has a system you worked on been hacked?
  • Have you cleaned up or traced activities after a hack attempt?

I am also almost 100% sure that we will soon see insurance firms add services for costs related to security threats, for private customers, companies and every other kind of customer.