Author Archives: brainart

How to succeed with your ‘gut-feeling’ estimates

Intuitively, many treat urgency and importance as the first questions about a problem or task to be done or solved. Surprisingly, complexity is often treated as a kind of “second-level certainty”.

The primary reason for this might be the need to rank the task against everything else that is going on. But how can that be done with any confidence without including complexity from the start? A gut feeling for complexity can be achieved instantly. It is just a matter of mindset and trusting yourself.

“Do I need to brake, or can I pass the crossing before the light goes red?”. Agreed, the urgency is at its absolute highest (but not necessarily, because one can intuitively just stop). In a situation like this, many will recognize that the complexity only becomes visible after the decision. Let’s say the decision is made: I DRIVE!

Milliseconds after that, you recognize the complexity:

  • “The pedestrian crossing, phew, it was empty (as it should be, lucky them..)”
  • “The surface is wet and snowy, but not slippery and icy. Phew..”
  • “No cars were in the crossing (they shouldn’t be, but one never knows..) Phew..”
  • “I didn’t miss any other rules here (a sign missed in the hurry), phew..”
  • “Oh, another pedestrian crossing.. it was empty.. phew..”

The list can go on. Does it show complexity paralysis prior to the decision? What if everyone ran this analysis in the moment just before deciding? A high-level, gut-feeling approximation of complexity. Too many uncertainties? Discard the option!

The complexity approximation model

Here I provide a model for classifying complexity. It is a personal vision, which I embody in some fictive stories told in a narrative way. This way of describing it has limitations, but it is a starting point for something that has worked for me since childhood. With this step done, I can optimize and improve the telling over time.

The model comes quite naturally to the brain. The trick is to relate conditions to the task while keeping focus on the whole. To describe this, I use a mental visualization built from geometric figures. Increases in complexity are shown as what I call dimensions. The images and descriptions are there only to embody the relationships, not to be pictured literally in your head.

The more the brain gets used to a problem, the more efficiently it automatically manages complexity, depth and the number of possible outcomes. While this article might appear obvious to some, I choose to share my view anyway. It is actually very simple to practice.

Let me kick off with a summary of what’s coming:

Work it from left to right

From left to right, complexity increases. On top of that, the end of the article elaborates on the superposition condition. Superposition is not about complexity, but about constraining the range or limits of the outcome. Now, let’s talk about dimensions. Example problems might be:

  • Deciding between an ad-hoc and a known strategy.
  • How to recover from a mistaken and unexpected information leak during a meeting.
  • Putting an insufficiently tested component into production because that was better than delaying the release.
  • Taking the bus without planning the trip in advance.

You know best what to define as your problem!

Let’s go on with the last one: the bus problem. The obvious reading means one condition: the time to reach the destination. Would you think about rush hour? Traffic jams, more passengers boarding the bus, more stops at red lights; all in all, conditions that affect the risk of not reaching the destination in time. Obviously yes, if the trip is a common one for you. Taking complex scenarios into account is automatic, it is just about getting the brain used to them. But maybe your scenario didn’t cover the queue at the food truck, so you end up 10 minutes late anyway.

A more complex and much more urgent situation is when your system suddenly goes down and your judgement call is needed. How to bring it back online? With less data loss, less time to recover, less damage to the traceability needed to identify the problem, less load on supporting functions when users call in about the problem. All while taking into account the load on the system once it is back online again. Meanwhile, the MTTR (mean time to recovery) is ticking..

The difference between the bus problem and the system problem is the complexity of making a decision, and also how familiar you are with the conditions involved in the decision. Let’s detail the bus trip scenario from a dimensional perspective. Analogies are often accused of being strange or incomplete, but bear with me. The “space” in the models (length) declares the aspect of the problem, for instance time (the bus trip measured in time).

Zero dimensional

According to geometry, zero dimensions can be seen as a simple dot. This one won’t get anywhere. There are no arguments and no considerations. Just a single condition.

The condition that acts here is not in any way capable of evaluating its own state within its dimensional space, nor any higher one. It simply does not know how it relates to, or is located within, the one-dimensional space.

The lazy brain might sometimes just want to know where the destination is. Not how to get there, why, or when. I would call that a non-dimensional thought.

One dimensional

Going from the zero dimension to the first dimension kicks most of day-to-day life onto the scene. Let’s demonstrate the metamorphosis by drawing a straight line.

A line like this symbolises one dimension very well, and how a zero-dimensional object is positioned within it.

Note that the line (the boundary of the dimension) is endless until you draw one. Also, the zero-dimensional object has an unlimited number of possible positions along the line. The only way to know where it is, is to actually draw the dot. Alternatively, add a rule that limits the possibilities, for instance “only positions evenly divisible by 5”.
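To make the rule idea concrete, here is a minimal sketch in Python. The 0–100 range and the divisible-by-5 rule are illustrative assumptions, only there to show how a rule collapses an unlimited set of positions into a known one:

```python
# A minimal sketch of how a rule limits an otherwise unlimited set of positions.
# The 0-100 range and the divisible-by-5 rule are illustrative assumptions.
positions = range(0, 101)                        # the "line", truncated so we can enumerate it
allowed = [p for p in positions if p % 5 == 0]   # the rule limits the possibilities
print(allowed)                                   # [0, 5, 10, ..., 95, 100]
```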

Observed from another angle, this dimension is aware of the presence of a zero-dimensional object. It is able to measure, analyse and understand it. If the first dimension wants to, it might tell the zero-dimensional object about its condition.

This is quite familiar from how we frame options in IT architecture to limit something unknown. Or from software development, where we have a master class that owns sub-classes. At least in those constructed SOLID.

Two dimensional

As you might guess already, the second dimension adds an axis and connects the lines. You get what we often refer to as a symbol.

At this stage, we abstract the zero dimension completely into the first dimension. We now have a similar issue: the one-dimensional object has an endless number of possible ways to connect between the lines in the two-dimensional object.

There is no way to know where the start and end are positioned before you actually draw them, or restrict with conditions where they are permitted to be.

Three dimensional

Up to and including this dimension, we are within the human brain’s capability to perceive a model. However, when adding a third dimension to a condition, things can become madly complex for a human brain to perceive. No surprise that AI is in such a hype currently; its popularity rests on the possibility of identifying questions and answers that easily pass borders the human brain so far never could.

But three dimensions is far from AI. It is just a condition that changes by a third factor. While your architecture meets a certain quality attribute with a grade of 1 to 5 (one dimension), that grade might decrease through causality against another quality attribute (two dimensions). Many analyses stop here, but there is normally another factor: for instance time, or CPU utilization from zero to 100. Or why not approaching or receding from a hurdle rate (a business term).

As one says about the speed of light in vacuum: the closer matter gets to it, the infinitely heavier that matter becomes, which is what refuses it from going faster.

Four dimensional

Aware that I am hereby stretching the limits of our vision, I won’t try to draw a spooky 4D picture. Instead I will try to visualise the practical result of how I tend to think at this stage. A statistician or a data scientist would feel at home!

What happens for me here is that a given three-dimensional scenario has, at any chosen position among the possible outcomes, unlimited possibilities in turn (seen in blue above). See the green thick dot above, which marks a chosen position.

The blue box can be one-, two- or three-dimensional, though the three-dimensional case is described above. So, if the picture were complete, it would need to contain as many blue cubes as the red dotted line can change within the set of rules that allow it. For instance, from 0 seconds response time to 180 seconds response time (i.e. a constraint or an accepted interval stated per requirement). The five “corners” in each line above might correspond to five statements that can change independently, but each one within the given rules (i.e. the response time is one sample of a rule that might be a constraint).
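To make this concrete, here is a minimal sketch in Python of five statements that can change independently, each within its own accepted interval. The 0–180 second response time comes from above; the other condition names and ranges are my own illustrative assumptions:

```python
# A minimal sketch of ruling an outcome in or out against per-condition rules.
# Only the 0-180 s response time is from the text; the rest are assumptions.
constraints = {
    "response_time_s": (0, 180),
    "cpu_percent":     (0, 80),
    "queue_depth":     (0, 1000),
    "error_rate":      (0.0, 0.01),
    "active_users":    (0, 5000),
}

def rule_in(outcome: dict) -> bool:
    """True if every measured condition stays inside its accepted interval."""
    return all(lo <= outcome[name] <= hi for name, (lo, hi) in constraints.items())

sample = {"response_time_s": 95, "cpu_percent": 60, "queue_depth": 40,
          "error_rate": 0.002, "active_users": 1200}
print(rule_in(sample))   # True: this outcome stays within all the given rules
```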

Much easier to picture mentally, right?

You might see here how the mental visualization of such scenarios can be very effective for making quick decisions. If you know the requirements or constraints of the system, you can easily consider forward or reverse scenarios to rule in, or out.

An example might be a regular reverse troubleshooting scenario, where you know the probability of a high response time is high, caused by congestion at API x (or why not a network link), resulting in inconsistent behavior in the GUI. Derive the parameters, or dimensions, further up: a number of users now interacting with the GUI in a different way.

Maybe your experience says that once this behavior occurs, it most likely causes follow-on issues. You might very quickly be able to ensure that a connecting component (or the component itself) increases its logging, and order a reset or cleaning of it.

Not so strange after all, right? Response time carries a level of probability, and its level makes different impacts on the congestion. The congestion itself might appear for different reasons and have different impact. You might know that completely choked congestion is more likely the network than the API. The change in customer behavior is just a matter of finding alternative sources to confirm or strengthen the validity of your decision, if needed.

Stateless (or superposition)

Quantum mechanics talks about superposition, which is a closer comparison than calling it a 4th dimension. Classic science usually refers to spacetime as the 4th dimension. While that is not true,
(more to come).

Wrapping up at the end of an assignment

Again, the time has come for me to wrap up a long-term assignment. This time there is over two years’ worth to gather up. At my age and with my background, I can honestly say that this situation is by now quite familiar.

Do you recognize yourself in situations where you are about to wrap up, and have either nobody to reach out to or everyone hunting you for all kinds of reasons? Let me describe four persona models and ways to go, based on my experience.

Wrapping up at the end of an assignment might be the most critical activity of the whole assignment. Even more than meeting the expected delivery criteria. Because, simply put, if the deliverable was not met, it is unlikely that new trust will be built upon your delivery. On top of that, this is the phase where your activities might make the heaviest footprint: a confidence test of the relationship between the customer and the company you represent, but also of you as a person.

While every exit is unique in a number of ways, there is a pattern where some activities recur from time to time. I will try to share some experiences from this, describing them as models and anti-patterns to try out.

Strictly speaking, a lack of quality in this phase can cost service and even the health of people. Note to self: this primarily concerns assignments and roles where one has been a key resource in a particular way, for instance a broad (or specialized) resource over time. Projects driven with well-defined deliverables, defined roles, an organized budget and release dates often have this handover and transition covered within standard risk management.

I write “often”, because it would be a lie to say that it is always like that.

Anti pattern: Spend all effort on one card

Once you gather the cards to end the assignment, loneliness might set in. All of a sudden, a manager approaches you with a “You know what you need to do” face, while everyone else screams for all kinds of attention to get something filed. You are left to spend whole days or weeks gathering data, holding meetings and answering questions about loose threads. The kind of questions that appear all of a sudden can be quite interesting.

Roughly, I model the ending as a share divided between four personas: “the documenter”, “the worrier”, “the careless” and finally “the vetger”.

While all of them have every right to mitigate their own risks, one of them might appear more convincing or even dominating to you. Perhaps one is your manager, or someone you look up to, i.e. an established friendship. Ending up focusing too much on just one of them might drive you into this anti-pattern.

Note: “careless” has a negative tone, but the point is the opposite. There are a number of fellows who just reply “no problem Jonas, we have all we need”. While this is a good sign, it is sometimes too lightly brushed off. To feel sure, I used to handle this situation by, for instance, asking about an activity that should be covered by that statement.

Anti pattern: Underestimate culture impact

There are two views of this. One is how the role in the organisation impacts the communication network. The other is the change or impact on team culture or spirit once you leave.

As for the former: relationships to other stakeholders. Often the communication network lacks persistence when key roles disappear. See this sample model.

As seen above, some resources are obviously handed over, but others are lost. A number of formal, informal or loosely related connections and relationships are usually forgotten or underestimated. Another challenge in the handover is how manageable it is for the new resource to establish and keep the connections. Personal chemistry often creates informal relationships, which might not work the same for your successor.

Other times the connections are there, but prioritization or a long period with no reason to keep in touch has made them cold, maybe even non-existent. Say a developer had a very tough time investigating network latency and therefore established good relations with network operations architects or managers. A useful contact to add to the stakeholder list, but maybe not critical to the application once the issue is sorted out.

As for the latter view: your own impact on the spirit and culture. It might sound unrelated to work, but it is directly related to the footprint you leave. If the culture is such that some person(s) got used to your presence, chances are the mood or emotions of at least one of them are moved. Personally I find it a tough thing to identify; I am often surprised by who really cared, or not at all.

Anti pattern: Try to model the phase out

This anti-pattern might sound like I am contradicting myself. However, it is a matter of what you try to model. It is a tempting idea to model the “how to decommission myself” process. Similar threats and worries can be identified across assignments, but the way they play out differs from time to time. An acceptable level here is very high level, just to demonstrate how you are going to take care of it. More detail, with dependencies added, is clearly an anti-pattern: it increases the risk of focusing too much on a single point and losing other aspects.

I find myself making up a list of areas. Then I try to lay them out in a matrix: a number of roles (or actors) against a number of concerns. Then I fill in how each role hits each concern. After that, each cell in the matrix can be measured and followed up (in case all of this is needed to ensure the handover). For each cell, questions like those below arise; a sketch of such a matrix follows after the list.

  • Does it need to be documented? How? Who is the receiver? Should it be 80/20 perfect, or just some guiding headings?
  • If “high level”, is it really helpful? Maybe this “high level” does not add anything at all? Should I be less lazy and provide some more depth?
  • Does it need to be repeated? Should the information be split in two: one more in-depth version for those most impacted, and one more high-level and broad version as supporting info for another related person?
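Here is a minimal sketch of such a matrix in Python; the roles come from the personas above, while the concerns and cell contents are illustrative assumptions:

```python
# A minimal sketch of the roles-by-concerns handover matrix described above.
# The concerns and cell values are illustrative assumptions.
roles = ["the documenter", "the worrier", "the careless", "the vetger"]
concerns = ["documentation", "open issues", "contacts", "follow-up"]

# Each cell states how the role hits the concern; start empty and fill in.
matrix = {role: {concern: "" for concern in concerns} for role in roles}
matrix["the documenter"]["documentation"] = "80/20 doc, receiver: operations team"
matrix["the worrier"]["follow-up"] = "book a checkpoint two weeks after exit"

# Each filled cell can then be measured and followed up.
for role, cells in matrix.items():
    for concern, action in cells.items():
        if action:
            print(f"{role} / {concern}: {action}")
```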

The “reference list”

Assuming any handover material exists at all, one highly valuable piece is a high-level document that covers areas, processes and stakeholders. And, briefly, areas that have been initiated or engaged during your time. If enough time can be spent, it is also valuable to document upcoming changes and challenges with some “gut feeling”. Having SWOT in mind might add some value.

Even if it is not directly related to the primary purpose of the project or system scope. As is often the case, this can ultimately be as simple as an Excel sheet with a few tables stating a contact and a reason.

Stakeholder lists are naturally short-lived, but they provide a valuable view of the bloodstream of your work. A piece of organizational evolution can be tracked once the gap is identified between now and the time when the list was produced.

Wrap Up

In the end, it might be your professional judgement in how you balance and motivate the effort spent on each of those three areas that sets how well you wrap everything up.

Link

Driving Architectural Simplicity – The Value, Challenge, and Practice of Simple Solutions

Simple architectures are easier to communicate, build, deploy, operate, and evolve. Architectural simplicity is not easily encapsulated by one type of model or practice. Several practices can be applied in combination to drive simplicity. Agile practices stress simplicity. Architectural complexity can occur based on many factors such as design ability and focus, technology evolution, and organizational structure.

What IT could learn from the construction industry

It continues to surprise me how much the construction industry and IT have in common. Let’s take 5 seconds for the picture in the header.

The one on the left shows the components (in a city) while the model on the right shows a strategy (in an IT enterprise). Could they be interchanged and still have the same relevance? See the buildings on the square as applications in a part of an enterprise landscape.

The visual expression of architecture differs quite a lot between the industries (for no obvious reason but culture and history), but some of the building blocks of design in both construction and IT are closely related in content. Let me show it across a number of concern areas. The following points are sorted intentionally.

Enterprise

The strategic level. In an enterprise landscape, dozens of applications compare easily to dozens of buildings in a city plan. As in cities, the enterprise is divided into units. Somewhere, there are borders. Many applications share, or have several interaction points with, the city as with the IT landscape, and need not only to co-exist but also to enable re-use. A construction strategy in one place can and should be re-used in another. It is about patterns and styles at different levels (architectural and local, i.e. a building or an application).

Roads

As in the city, there are interconnections between applications. There is co-existence between applications as well as buildings, where roads are used to access them. Network zones and virtualized clusters or the cloud are clearly domains of roads, where you pass cities or containers or even mega-blocks of IT applications. Or the opposite: clearly defined borders where no interactions are allowed. Some applications are more restrictive about what and who gets access, others more open. An intranet is like a plaza or marketplace. A security-defense firewall sandbox is like a prison.

Water, electricity and drainage

Moving on to a domain where we clearly have two levels of interaction. Local capabilities are meant to serve selected buildings, just as information serves selected IT applications. Pipes run in and out of buildings; they need to stand for ages but still be open for modification as the building(s) change. The enabling of information can be materialized as connection strings, SSH connections, web-based API communication or message-based transports (as in AMQP, WebSockets, MQTT and so on). But you obviously also need to consider the city level of water, electricity and drainage: ensure the capacity meets the sum of the needs of all the local facilities. From the IT perspective, why not mention whole integration frameworks such as BizTalk, WebSphere or Microgen, or an ESB and network transportation.

Facilities

The smallest whole unit of this article: a building, or an application. Inside a building you would love to know there is everything from restaurants and car parking to toilets, depending on its size and context. Likewise, you might be aware that your application delivers radio signals, is a big-data container or a reporting platform, or hosts hundreds of thousands of micro sensors to control or read data from. Just as it might specialize in connected QR and barcode scanners, or contain components such as a computing grid or advanced financial algorithms delivered as a service to other applications or users.

Security

In this article, we started with a holistic and strategic view at the enterprise and city level. Once done, we hovered over some more tactical samples. The next step is naturally the interdisciplinary parts: the baseline that both strategy and tactics share, to provide or ensure capability, robustness, availability and scalability. Frameworks and methods that ensure decision makers provide resources in a structured way.

  • How secure are the roads around the city plan?
  • How reliable is the network connectivity between servers and clients, wifi or cabled?
  • Are pedestrians safe thanks to the surrounding constructions?
  • Can users interact with the application in a way that accidentally or deliberately causes damage?
  • What about firefighters and ambulances, can they easily get out to help?
  • Is there even a need to assess emergency transports? (Deciding “not affected” is obviously part of a plan.)
  • Once a virus or malware enters the network, or a backup needs to be recovered, how quickly can the concerned security experts act and recover? Restore a backup, remove a virus, or disconnect a computer that sends inappropriate network traffic.
  • Is the concrete that the buildings and city are built on safe from fire? Can thieves easily enter and monitor areas unseen?
  • Does a particular building have considerable security concerns (jail, bank, social department) that require additional activities in the overall plan?

So constructions compare from city to buildings much like IT compares from enterprise to individual application. Security is explicitly verified for the local concern (a particular application), but considered in a wider scope (the architecture) and assessed against the whole enterprise. Why not through IT governance.

Quality attributes – Not just for IT, right?

Going from the areas explained above to a pick of specific analogies.

  • Usage – How many use it? Who are the visitors and who are the customers?
  • Agreements and contractors – Who uses it, why, for how long, and on which terms?
  • Accessibility – How accessible is the facility, when and how?
  • Agility, changeability – Add layer(s) of functional components.
  • Scalability – Scale out (horizontally) or scale up (vertically).
  • Decommissioning – How can this thing be replaced or taken down?
  • Business valuation – What value does this provide compared to another?
  • ..what would you add here?

Sum up

The design skills behind construction and the design skills behind IT may indeed have a lot in common. Let me take a non-educated but intuitive guess: IT is more skilled in process and modelling than construction. This is because of the much more intensive engagement between business and technology, and the speed of change that is a key factor within information technology; the need for processes is apparent in more layers of the whole. Construction can therefore find value in learning from IT about transformation, agility and speed of change.

The other way around, IT would benefit from learning from construction, which has been around for centuries and is well acquainted with million- and billion-dollar projects. They know how to work efficiently and ensure a delivery that works. Not least, they may know how to define roles and hire cost-efficient manpower that a much higher percentage of the population can fit into, without requiring years of university just to step in.

This article is based on conceptual ideas and comparison, and should enable a discussion about how badly IT needs to grow and strengthen a layer of strategy between the work orders and the business.

Do you protect remotely read sensors and meters from unauthorized handling?

Lately there has been a lot of security focus on remotely read sensors and meters. This post is aimed at those who own such solutions and want collection from, and administration of, them to work with stable functionality. Are there already items like “is our information secure?” and “do we have a plan if our sensors get hacked?” on the monthly agenda for your developers and architects? Does that forum maintain a running plan of improvement items that are followed up? In that case you can stop reading here, because I believe you are already on the right track.


Whose responsibility is it that the solutions are secure? Whose responsibility is it to guarantee that there is knowledge, and an up-to-date definition, of what security is?

Allow me to boil this down to three conceivable reasons why security is not on the agenda. Perhaps there are high hopes that security is already fine. Perhaps the topic is not the most important one on the agenda compared with defects and new development? Or perhaps the thinking is that nothing serious can happen. Sensors and meters are installed somewhere every day, in short everywhere and in everything. For example to comply with requirements and norms for proving energy usage or hourly-read consumption. In other cases to make properties more attractive for rental. In yet other cases purely proactively, to predict and address events before they become serious. With IoT there has been an upswing of new kinds of sensors and new kinds of use cases. And as a spice on top of all that goodness follows a little tail of rushed design and improvised functions that never went through proper security work.

The sensors and meters I refer to communicate over networks; they are not read visually. Communication usually happens over the open Internet, via TCP or UDP, to central collection points or data-collection systems. Later the data ends up in various kinds of storage systems (data warehouses, big data..) for billing, troubleshooting, analysis and trend detection. Strikingly often, the information sent is fully readable while it is transported between the networks. If it is not readable plaintext such as XML, CSV or JSON, it probably follows a well-established and documented industrial standard, such as OPC or Modbus. Thereby easy to decode with standard software, or with a couple of hours of home-grown hacking supported by the available documentation. Where applicable, sensors can also be administered remotely. But sensors have been around for a long time, sending data and serving billing for years. It works fine, so why complicate things?

“If it ain’t broke, don’t fix it”

There are open security threats – right now

There is a security threat in this, for real. Which do you think is the bigger threat: the absence of security, or the failure to fund and prioritize security? Threats against connected control and regulation sensors are mentioned constantly, recently on SVT, so no further evidence for the claim is needed right now. Awareness of these systems, and of the impact they can have, is growing and spreading across the world. A simple example is remote control of heating plants. Sometimes these systems are completely wide open on the Internet, with publicly accessible IP addresses.

“Forbid” sales and contract terms that in any way tie security to a cost or a price. Let every form of security be included, so that nobody starts gambling away security by letting costs and revenues be affected by it

Every day, some kind of software is created with the purpose of automatically scanning for weaknesses and vulnerable connected devices and systems. In the somewhat better cases, vulnerable meters and sensors sit behind NAT, which to some degree raises protection against such scanning. A more targeted attack can, however, get through weaknesses in the NAT setup and then reach the connected devices. Strikingly often, NAT solutions are deprioritized: either avoided altogether or built with amateur equipment. Operators of sensor reading may complain about more complicated configurations, and that better hardware means higher installation costs. Costs that in turn stratify customers into different segments. The consequence, generalized, is that the cost-conscious segment invents its own solutions to match its cost profile. On the opposite side there are bigger “dragons” who consider themselves able to afford building their own solutions. In the latter case, the risk instead grows of solutions run as projects with an agile and functional focus. If the security thinking there was “the VAT deducted at the end of the budget/schedule”, then hundreds of thousands of meters and sensors can be affected by the same hack.

Out in the real world, login mechanisms can be missing, and there are meters that are “dialed up” and queried one by one. Encrypted connections may be missing too, or use certificates that are invalid or issued by questionable authorities. Not seldom, new installations of meters and sensors are made available with default settings, public long before they are brought under the control of the remote-reading systems. Some hardware can contain functions that answer questions about how it works. If the systems cannot be queried, traffic analysis may still reveal which manufacturer and model is involved. Once identified, the manufacturer probably has a manual published on its website, or some partner has published it, in good faith.

So what can be done better?

Feeling unsure? I can probably give you a review with focus areas and a proposed plan after a few hours of study and some information about vendors, systems and hardware. But a lot can be done on your own, without bringing in such support. First and foremost: put security on your own agenda. Find out what the team thinks about information security in general and its own work area in particular. What is the definition of security? Build a list of the differences between the current state and that definition. Brainstorm scenarios that could arise if various things happen. What impact would they have? How do you recover? Then assign and prioritize concrete activities to identify what the actual weaknesses are and how to fix them.

These days it can pay off to slow the pace of innovation in favor of securing the foundation

Information security is cross-functional and concerns more than the technical. How do employees work with the company’s information internally? How are passwords stored and renewed? How are they sent to customers and colleagues? Are there parts of the work environment that obstruct security mechanisms, so that people work around them (the post-it-on-the-screen syndrome)? Are there other open systems, APIs and services that lead into the company’s network?

That last, retrospective question may feel outside the “information security” area, but view it indirectly through the following example: suppose a sweep attack hits the company’s network Z1. It exposes an instance of MongoDB that got caught in a global sweep attack and was hacked. That instance may then have leaked information such as the IP addresses of hundreds of thousands of sensors to the wide world. Perhaps recent measurement values sat in that database too, so the hacker can also see which sensors appear to be alive and what they produce. This is not far-fetched. It happened just the other day.

Luckily the hack got no demographic data, since that sat in another database and technology – MongoDB admin

It is of course a big question mark why Internet-open databases are left without an admin password. But sometimes the answer is curious developers doing their job and testing other technologies, perhaps outside their area of expertise. Running demos with copies of real information. Perhaps getting sloppy about shutting the services down after testing. To close the circle on the subject of publicly accessible sensors and meters:

  • Require, as part of everyday work, that penetration tests are run on the information going to and from senders and receivers. Investigate what information is spread and why. In principle too, not just technically.
  • Make sure someone in the team masters packet analysis and dissection of data for the protocols the meters or sensors use.
  • Make security work a deliverable in every testable function that is programmed, and also at the overall level of design and architecture.
  • If a security mechanism is deprioritized in favor of shipping a new function or meeting a legal requirement on time: what is the plan to catch up later? What will it cost, and when will it be done? Always have a plan.
  • Engage external reviewers of the security to deliver a report and proposals. Set a point in time on the horizon for a follow-up.
  • One or more of your own staff should be able to understand, develop and maintain the security.
  • Work closer with partners and suppliers of sensors/meters to gain better opportunities to give input that pushes them to build more secure devices.
  • “Forbid” sales and contract terms that in any way tie security to a cost or a price. Let every form of security be included, so that nobody starts gambling away security by letting costs and revenues be affected by it.

Sub-goals every system should reach

The goal that security work should aim for is to protect sensors from unauthorized access. At the very least, a basic perimeter protection as a requirement. If that means the work takes a little longer, include the extra time as a cost of becoming more efficient later.

It should be harder to see the information being transferred. Besides having an SSL/TLS connection, there are plenty of methods to also encipher the data while it is in transport. Very basic options are XOR ciphering or Base64 (which is merely an encoding), but more modern encryption algorithms such as AES are of course preferable.
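As a minimal sketch of the preferred direction, assuming Python and the third-party cryptography package on both ends (the payload and the key handling are illustrative, not a production design):

```python
# A minimal sketch of enciphering a sensor payload for transport, assuming the
# third-party "cryptography" package. Fernet uses AES under the hood and also
# authenticates the message. Key provisioning is out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice: provisioned per device, stored safely
cipher = Fernet(key)

reading = b'{"sensor": "A1", "kwh": 42.7}'   # illustrative payload
token = cipher.encrypt(reading)              # what actually travels over the network
print(cipher.decrypt(token))                 # the collector recovers the plaintext
```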

Information about sensors and where they are located geographically needs protection too: both in how the data is stored in the systems and in the information sent during transfer, in addition to the stored measurement data.

The integrity of the information sent from meters and sensors must be verifiable as intact by the collecting system. If it is not, how do you act? If it is, has it made a stop at untrusted sources before arrival? How do you verify that, and then act on it? Remember that every link hop has a living network administrator.
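A minimal sketch of such an integrity check, assuming a pre-shared key between sensor and collector; the key and payload are illustrative:

```python
# A minimal sketch of integrity verification with a pre-shared key, using only
# Python's standard library. The key and payload are illustrative assumptions.
import hashlib
import hmac

shared_key = b"provisioned-per-device"       # in practice: unique and secret per sensor
payload = b'{"sensor": "A1", "kwh": 42.7}'

# The sensor attaches a tag computed over the payload..
tag = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()

# ..and the collecting system recomputes and compares before trusting the data.
expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))    # False would mean: do not trust, act
```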

Typical resistance thresholds, shown in numbers

Some kind of natural law sets a threshold of resistance: once one level is passed, you move on to the next “level”.

Let’s start off..

First you start with 0
– You strive or struggle to reach your first: 1.

Once you have reached 1
– You strive or struggle to double up, reaching 2

Once you have doubled up (2)
– You strive or struggle to reach a multiple of 10, being 20

Once you have reached 20
– You start striving or struggling to reach another multiple of 10, being 200

Once you have reached 200
– You are tempted to gain another multiple of 10, being 2,000

Once you have reached 2,000
– You think: how hard can it be to gain yet another multiple of 10? You aim for five digits, being 20,000.

Once you reach 20,000
– Here it feels easy enough to do more of what you have already done. Why not multiply by 10 again, reaching 200,000

Once you reach 200,000
– You are probably very hungry to hit the legendary million mark, and would certainly not stop here. You aim for another multiple of 10, being 2,000,000

Once you hit the million, and reach the second
– You already get the idea, but might be obsessed with passing the 10th million. Or you stop caring about the race, here.
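The whole ladder fits in a few lines of code; a minimal sketch of the sequence as described above:

```python
# A minimal sketch of the resistance ladder described above:
# 0 -> 1 -> 2, then a tenfold jump at every following level.
def thresholds(levels: int):
    yield 0
    yield 1
    value = 2
    for _ in range(levels):
        yield value
        value *= 10

print(list(thresholds(6)))   # [0, 1, 2, 20, 200, 2000, 20000, 200000]
```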


When can this be of relevance? Pretty often, right?

  • Money earnings?
  • Number of sales?
  • Number of Dolce & Gabbana garments?
  • Website page visit counts?
  • Number of miles driven?
  • Collections of items of a particular kind?
  • Number of reads of a LinkedIn post?
  • Number of tweets you have posted?
  • Number of comments you have made on Quora?
  • Citizens whose opinions you influence?
  • ..your bet!


7 signs that programmers will be the next dime-a-dozen workforce, open to many educational levels

It’s not hard to see that the programmer slash developer role is going to increase significantly in importance and in headcount. This trend will surely come shipped with changes in ways of working, scoping and how one can specialize! Let’s go :-).

  1. User experience designers, UX. Maybe the most obvious current trend. A strongly emerging work role that is more about visualizing, modeling and describing requirements. For a long time, part of a coder’s life has been to design the frontend and thereby, strictly speaking, tie whole business areas to the momentary feeling of the individual coder. Now coders can focus on modularizing, producing competent and extensible code, and creating a design that meets something users have already agreed on.
  2. Test strategies with automation that build traceability and business acceptance into the test framework, including planning and driving delivery frameworks such as agile or waterfall. The test manager and testers work as a new role here, often never touching a line of code, instead defining acceptance criteria and measurements. They might also drive required changes to the testability of the solution. Coders are there to develop for those needs, and are no longer expected to define and develop comprehensive tests and test tools themselves.
  3. The failure rate of IT projects is still high. Often due to a lack of human resources, or key individuals who play too broadly in the delivery and cannot deliver full quality in all areas. The number of developers in a delivery must increase dramatically. The cost on the developer side would be decreased scope and a narrower use of each person’s knowledge. This opens for a new production-line way of thinking, driving a new level of skills and salary expectations for doing professional work.
  4. DevOps, to automate and parameterize release and deployment. Abbreviations such as CI and CD disarm the (maybe) most challenging and specializing part of a coder’s role: understanding the often very complex and volatile relationships between dev, integration, test and prod. All of a sudden, continuous deployment and continuous integration can be done across the organization by a new kind of role, without routing developers and operational resources through all the build environments. Coders can instead concentrate on building solutions that are easy to parameterize.
  5. IT architect is firmly on its way to becoming an agreed profession. The roles are clarified and contextualized. Several organizations are starting to agree on the big picture. All signs along this path make it clearer that architecture and architects form an interface (not a border!) to development and programming. Coders no longer need to “call out” to find out which integrations, components or deployments need to be involved.
  6. Virtualized servers, data centers, the cloud and Docker. A technology paradigm built from the ground up to bring infrastructure as a service. Capacity sizing and physical limitations are now in the hands of infrastructure specialists. They deliver a reliable, fail-safe OS level, perfectly patched, backed up and with disaster recovery included. Coders instead focus on providing and deploying software functionality onto the infrastructure components, and on increasing their awareness of how to build energy-efficient, low-utilization solutions capable of scaling out.
  7. Portable frameworks, microkernel and microservices patterns, and software design. The separation of software functionality into portable components, and the increased asynchronous or loose coupling between them, lets a developer be very, very specialized within a particular area. Framework-oriented development is not the future either; it is here. A coder’s specialization lies in the capability to select the correct framework and implement instances of it. There is, simply put, no need for more than a few developers with “full stack” competence, i.e. because of monoliths.

Sum up

In many of the listed areas, there are clearly improvements in sight for the developer slash programmer slash coder role. A kind of purification of the role, which most likely helps you as a recruiter, and you who provide or evaluate the education and skills of a resource. In several ways this feels to me like the former big “industry floor”, where workers were divided into, and stayed within, functional areas.

A production line roughly starts with:

  • transport to warehouse
  • selecting from warehouse
  • assemble and inspect <– many
  • test what’s assembled <– many
  • repair what doesn’t pass the test
  • finally send to the delivery queue.

Separate mechanisms take over in the dispatch area. In practice, resources may switch between the functional areas, but they can function in only one at a time and organizationally belong to only one.

When I convert this to IT, I find:

  • requirement modeling
  • sprint planning
  • coding <– many (coders)
  • early stage testing <– many (coders)
  • testing <– many
  • repair, re-deploy in test, re-test
  • continuous delivery

A challenge to all of the above is the role that used to be defined as “senior developer”, sometimes borrowed from the architecture side as “senior developer slash architect”. Might we see many of them convert to lead developers, in line with test managers? Or pure architects, or technical project leads. Those are, however, key positions rather than a dime-a-dozen role. This latest style (archetype) of developer is not where we will see the increased number of developers in the upcoming years.

New bounties await those of you who hack Windows

Microsoft has announced two new bounties for those of you who break or hack functions in Windows.

Microsoft Windows Bounty Program Terms

Microsoft is pleased to announce the launch of new Windows security bounty programs beginning July 26, 2017. Through this program, individuals across the globe have the opportunity to submit vulnerabilities found in latest Windows 10 Insider Preview slow ring. Windows 10 Insider preview updates are delivered to testers in different rings.

Microsoft Windows Defender Application Guard Bounty Program Terms

Microsoft is pleased to announce the launch of the Windows Defender Application Guard (WDAG) bounty program beginning July 26, 2017. Through this program, individuals across the globe have the opportunity to submit vulnerabilities in WDAG found in latest Windows 10 Insider Preview slow ring. Windows 10 Insider preview updates are delivered to testers in different rings.


Here is the full program of current bounties, and some that were available earlier:


Microsoft Security :: Security Vulnerability | Report a Vulnerability | MSRC:

Microsoft has championed many initiatives to advance security and to help protect our customers, including the Security Development Lifecycle (SDL) process and Coordinated Vulnerability Disclosure (CVD). We formed industry collaboration programs such as the Microsoft Active Protections Program (MAPP) and Microsoft Vulnerability Research (MSVR), and created the BlueHat Prize to encourage research into defensive technologies.


What Albert Einstein knew that is useful for IT

New link. Read the post here:

Einstein knew what would be relevant for today’s IT


Some basics that drive IT today were known already by Einstein

The idea for this post came on a completely normal Tuesday evening as I was reading an old book I had found in my bookcase. The book is a small summary of Einstein’s general and special theories, in approximately 150 pages.

This is certainly a book that takes some time to read. Not because it is complicated; on the contrary, it is quite simple. But it surprises me how many of the mechanisms of the universe he describes are actually building stones of humanity’s unique ability to perceive, learn and do better. Problems that can take another mammal dozens of generations to learn, through selective inheritance of the genome (because weakness is naturally bred out), can be learned overnight by a human.

Okay, thanks for the anthropology – what’s the IT deal in this?

I want you to read this paragraph in the book. My copy is in Swedish, so I give it word by word in Swedish and then amateur-translate it into English myself.

This is just one of many passages in the book, and this particular one is easier to relate to everyday events.

Right here, I want to emphasize Einstein’s focus on how the perception of events lies in how different actors perceive the same object, and what differences they recognize. On top of this, the different mechanisms and theories he needs to invoke and describe in order to prove what each actor perceives.

So it became clear to me during this reading, and it is the reason for this post: this is exactly what defining and documenting an IT architecture is about.

Let’s draw some similarities. An IT architecture:

  • is a moving object in its space
  • has different actors
  • has properties with different impact depending on the actors (and the changes in its space)

Conclusions from the perspectives that Einstein is keen on

  • You take a viewpoint, for instance Kruchten’s 4+1, and define useful perspectives for the audience.
  • On the perspectives, you define views. The sample from Einstein defines two perspectives: one is the pedestrian’s, the other is your own, looking out from the train wagon.
  • The views are what require Einstein to become scientific in his answer. This is also where our competence makes the most sense:
    • To describe what happens.
    • Why it happens.
    • Which objects are related.
    • Why they are related.
    • Are there other processes or views adjacent to this,
    • not mentioned here, that have impact?
    • ..and so on.

I would not use this post to convince you that Einstein discovered the methodology of viewpoints. It is just a populistic way for me to convey the importance and impact that viewpoints have. An IT architecture could actually be seen as an organism, hosted in technology but driven by humans. Some mechanisms are simply related to how humanity inhabits Earth, and Earth is driven by the laws of the universe.

I also want to point out how Einstein masters the crystal-clear viewpoint – perspective – views methodology throughout the book. It has helped change the view and understanding of the world’s building blocks for hundreds of millions of people across the world.

Can viewpoints together with such clear views change the understanding of hundreds of thousands of IT systems around the world? Of course, yes! And yes again. Some are already doing it; for the rest of us: let’s study! Once you master the methodology and have the experience to define relevant viewpoints, it becomes much easier to concentrate on providing the best scientific (or exact) facts in the views.

Thanks.

And some links:

Get help from IASA Global’s evolution of Kruchten:

SSA – Views, Viewpoints and Perspectives

Context Describes the relationships, dependencies, and interactions between the system and its environment (the people, systems, and external entities with which it interacts). Many architecture descriptions focus on views that model the system’s internal structures, data elements, interactions, and operation.

Einstein’s general and special theory:

Relativity: The Special and the General Theory – Wikipedia

It was first published in German in 1916 and later translated into English in 1920. It is divided into 3 parts, the first dealing with special relativity, the second dealing with general relativity and the third dealing with considerations on the universe as a whole.


We are mourning MS Paint – but remember the destiny of File Manager 20 years ago?

Loud barks echo around the Internet regarding the discontinuation of MS Paint in Windows.

Does anybody here remember the voices when MS did the same to “File Manager” (winfile.exe) in the last versions of Windows 3.1 and the Windows NT 3.5 server era? The loss of WinFile was a catastrophe for productivity, until Total Commander saved the world.

But that too is part of history, and today only dinosaurs know about it. Now let’s mourn MS Paint and make some guesses about what will be discontinued in about 20 years, around 2037. What is your best guess?

Read more at Windows.com >>

MS Paint is here to stay

MS Paint fans rejoice: The original art app isn’t going anywhere – except to the Windows Store for free!

On top of this, Microsoft comments that Paint 3D will be next. Despite the headlines, classic Paint moves out of Windows and needs to be fetched from the Windows Store.

A slightly better destiny than File Manager’s (which faced a large number of technical limitations against modern operating systems).

https://www.theregister.co.uk/2017/07/25/microsoft_paint_on_windows_store/

Remember the invaluable software design patterns? It’s debt time

I want to revisit a time of mine, and perhaps of yours, when I was obsessed with the Gang of Four, GoF: the super(?) popular collection of design patterns for OOP programmers to follow when developing solutions and applications.

  • The developer role has evolved
  • Separation of programmers and strategists
  • Using patterns to communicate over principles
  • Identifying value by investigating dependencies

GoF is a collection of programming design patterns that can be used to solve many common problems in object-oriented software development. They bring a lot of value to OOP-style development.

In addition to the man-hours I spent as a developer reading and learning the patterns, I also spent countless hours over the years developing and implementing them. Like a bible, both in professional and spare-time projects. I never seriously challenged the importance of the patterns; I just followed them slavishly.

A sign of not being senior enough. Or perhaps, as I would say today, of not being questioned enough. As the programmer who knew his patterns, I was not questioned about what I said or did. Instead, my agitators were at StackExchange. I challenged my implementations and worked closely with StackOverflow. Of course I was boiled with razor blades. But I got skilled and learned my lessons.

But the patterns were still the bible, even on StackOverflow.

In time, I learned to look back at what I had really done and how I did it. I raised the holistic view and took a whole-picture perspective. What did those patterns really mean? I am still not sure why this shift happened to me. Has the role of the programmer evolved lately, so that the role is more expected to demonstrate the value of the code to strategists and the business? Sometimes the roles are (and, depending on the organization and assignment, should be) mixed, as in strategist and developer being the same resource (for instance a very highly skilled expert). While that separate topic might be interesting, let us re-connect to the patterns.

OOP design patterns back in the day

Back then, patterns were quite easy to demonstrate, because a problem was solved with one or a few tools and frameworks, for instance the JDK/JRE or .NET C#. Collections such as GoF covered most scenarios, so not following a pattern was what looked strange. But the important question that was missing about the patterns was: how was the pattern implemented? There were not many questionnaires or analysis methods around to confirm that the development was valuable. Back then, I didn’t need to provide proof that the implementation would be measured in value. For sure, I would be questioned in terms of SOLID or even OOP, and boiled or blamed for every mistake. But I would certainly not be questioned on how I could ensure the business wasn’t losing money when the CTO requested a new integration.

That was the good thing for me, and the bad thing for the business. Do you already know why that was bad for everyone, including the customers? Because it would surely create a business (or sales) versus IT (technology) culture.

For a strategist, the segregation and understanding of the dependencies between the values that objects hold or enable is far more important. Not to mention the value that might be lost in one part of the system if another part gets into trouble. I will demonstrate below. Say that I, back then (a programmer following design patterns), wanted to describe to the CTO a pattern I had built and how good it was. So I created models, because it is obviously hard to traverse through code, classes, namespaces and technologies.

With this model, it all looks clean and good. But the 10-point question is: how does it stand up when described in terms of value to the business? Sometime later, the CTO or the business may ask to add new logic on top of this nicely described adapter pattern. Or may ask what amount of money we lose if the XmlAdapter unexpectedly stops working.

Impact in terms of (any measurable) value

By looking at the model, it is easy to trace the impact of disconnecting a component. The XmlAdapter seems easy to just cut off; yet even assuming the model is true, that assumption will probably cause big problems. One might ask what is meant by “value”. The context probably shifts with trends and times: the value might be monetary, the number of deployments required, or the number of components impacted.
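As a minimal sketch of that kind of tracing, assume we record which components depend on which. XmlAdapter and TheDataHub come from the example; the other names are illustrative assumptions:

```python
# A minimal sketch of tracing downstream impact when a component is cut off.
# XmlAdapter and TheDataHub come from the example; other names are assumptions.
dependents = {
    "XmlAdapter": ["TheDataHub"],
    "CsvAdapter": ["TheDataHub"],
    "TheDataHub": ["BillingReport", "MonitoringGui"],
}

def impacted(component: str) -> set:
    """Everything downstream that is hit if `component` stops working."""
    hit, stack = set(), [component]
    while stack:
        for dep in dependents.get(stack.pop(), []):
            if dep not in hit:
                hit.add(dep)
                stack.append(dep)
    return hit

print(impacted("XmlAdapter"))   # {'TheDataHub', 'BillingReport', 'MonitoringGui'}
```

Pricing each impacted node (in money, deployments or components) then turns the same traversal into the value answer the CTO asked for.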

In the current generation of technology development, it is a losing concept to just throw a design pattern over the table and then implement it. A change needs traceability, explanation from relevant views, and intent. Documenting, structuring, adding traceability, communicating to stakeholders, confirming and getting sign-off might take more time than the actual development. But that is the point. The result is systems that are understood by their audience, stable operational conditions (or known reasons for not being stable), and development, change and release that follow a process.

How can a CTO plan for emerging trends or match the rapid changes of the business, if the CTO doesn’t know the technology’s significance between planning and deployment? Shouldn’t the CTO know whether a change requires re-factoring half the codebase, or just a minor part?

How do we ask the right questions for this, and who cares about the value? Simple answer: make sure there is a strategist role in the project, company or department. A quite simple way to challenge the valuation in the earlier example, starting in a company from zero, could be a request like this from the CTO:

“We want to introduce a manufacturer that produces nano sensors. It will surely require a new adapter, but it should behave exactly the same against TheDataHub. The difference I can think of is to make sure that the hardware identification has space for 256 characters. See the sample model.”

We can also see the CTO’s level of understanding, which is really important, and that the CTO thinks he or she has an idea of what needs to be done. According to the pattern we followed in the earlier model, a dummy adaptee should be implementable with a kick. Right? The tricky part would be to attach the functionality for connecting nano sensors, preferably done in a separate space attached to the adapter. A sketch of that idea follows below.
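Here is a minimal sketch of that idea in Python. TheDataHub, the XmlAdapter and the 256-character hardware identification come from the example above; the interface and method names are my own illustrative assumptions:

```python
# A minimal sketch of the adapter idea from the example. TheDataHub, XmlAdapter
# and the 256-character id come from the text; the rest are assumptions.
from abc import ABC, abstractmethod

class SourceAdapter(ABC):
    """What TheDataHub expects every adapter to look like."""
    @abstractmethod
    def read(self) -> dict: ...

class XmlAdapter(SourceAdapter):
    def read(self) -> dict:
        return {"id": "meter-1", "value": 42.7}    # stands in for real XML parsing

class NanoSensorAdapter(SourceAdapter):
    """The new adaptee: behaves exactly the same toward TheDataHub."""
    HARDWARE_ID_MAX = 256                          # the CTO's 256-character requirement

    def read(self) -> dict:
        hardware_id = "nano-0001"                  # illustrative device identity
        assert len(hardware_id) <= self.HARDWARE_ID_MAX
        return {"id": hardware_id, "value": 0.007}

class TheDataHub:
    def collect(self, adapter: SourceAdapter) -> dict:
        return adapter.read()                      # unchanged, whatever the source

hub = TheDataHub()
print(hub.collect(XmlAdapter()))
print(hub.collect(NanoSensorAdapter()))
```

If the real implementation looks like this, the CTO’s request really is “implementable with a kick”; the next model shows why it often does not.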

“Cool, just relax – I’ll code it and return to you when it’s done!”

The true developer (looking at myself back in time) would stick with this comment and start developing. I tell my CTO that he can relax while I realize the model with the new adaptee for nano sensors. So now the moment of truth has come. Will it be that easy? Regarding flexibility to technology changes for the business, do you think the CTO floats in the land of unawareness? On his hands there are most likely external expectations from a manufacturer, which might also have development to do to meet the CTO’s expectations.

Let us jump back to the developer's view. Today we must be able to question models and code implementations in more ways. It is simply not enough to be provided a single simple view. Once we are provided several views (relevant for the situation), the strategist can ensure that the change or new capability gets the correct attention and/or resources.

Assume that the following model is closer to the actual implementation of the famous pattern, which is a completely possible scenario:

An experienced developer can quickly see that the CTO, or an architect, has a problem if a new emerging technology is to be implemented here.

The strategist would have needed to ask for some views that are not completely code related. For instance:

  • How would authentication touch the components?
  • What infrastructure objects exist, and what are their relations (for instance, is the database on a separate server)?
  • How do backoffice/admin users connect to the components?
  • How many kinds of readable objects are there?
  • What is the current frequency of each object?
  • How do the frequency and the number of objects relate to, and between, the adapters?

… and so on. The pattern here is questions that may have significance for the design.


Value in adding a strategist

In this fictive example, it is clear that we have a responsibility to support the way the application is meant to provide service to the users. What is not clear is that we should leave all that responsibility in the hands of a developer, with nothing in between the CTO and the developer. This does not say "we don't trust developers"; it is more like saying "there is a nurse between the client and the doctor". With very good reason: the views and concerns are different, and so are the tasks. It is simply not fair to place all the responsibility in the hands of the developer (or to require the CTO to have very developer-focused skills).

Let's take some of those example questions to the model above: not much is answered, right? We just see that the implementation really is an adapter pattern. Points earned there. But it is more of an adapter style than a pattern. This pattern implementation will for sure cost a lot to separate into modules.

We can quickly see some required improvements, but that's not the point. We already see that the lack of clear abstraction between the data layer and the adapters gives off a very ill smell for stability and deployment, just to increase the length of the string column holding the object name identifier. We also see that the authentication is really in the shadows.
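To illustrate that smell with an invented example (this is my own sketch, not the article's model): if each adapter embeds its own SQL against the shared table, even the simple 256-character identifier change touches the schema and every adapter at once.

```python
# Hypothetical illustration of the missing abstraction: each adapter
# writes straight to the shared table, so the schema leaks everywhere.
import sqlite3

conn = sqlite3.connect(":memory:")
# The identifier column is sized in the schema...
conn.execute("CREATE TABLE readings (object_name VARCHAR(64), value REAL)")


class XmlAdapterCoupled:
    """No data layer in between: the adapter knows table and column."""

    def store(self, object_name: str, value: float) -> None:
        # ...and the size limit is re-checked here, by hand, in every adapter.
        if len(object_name) > 64:
            raise ValueError("object name too long for the column")
        conn.execute("INSERT INTO readings VALUES (?, ?)", (object_name, value))


# Raising the limit to 256 now means touching the schema AND every adapter.
XmlAdapterCoupled().store("xml-001", 42.0)
```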

Having a strategist, for instance an architect, an architect-like CTO or an analyst who knows how to ask the right questions, to provide a bridge between strategic needs and technical requirements, can really preserve or increase the room for improvements you have in the roadmap.

The days when the heavy/senior developer can do everything must be consigned to history. That is not, as already emphasized, to question the skills or understanding of the role; it is because the concerns are different. Skilled heavy developer resources can deliver both strategy and code, but be aware that they provide value under different principals, and make a way to be clear about it. If not, this resource (you?) might consider whether it is really a developer or really a strategist, and continue with one of them as the home zone.

/Jonas


Read a recent page that lists the GoF patterns here:

The 23 Gang of Four Design Patterns .. Revisited

The Gang of Four (GoF) patterns, from Design Patterns: Elements of Reusable Object-Oriented Software (Addison-Wesley Professional Computing Series) by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. These 23 GoF patterns are generally considered the foundation for all other patterns.

Analysing capacity: ±15 deviation compared to ±0 deviation

Would this model make sense as a visualization of how analytical depth and skill could differ between individuals, beyond education and experience?

When would the #2 archetype fit in professional work where #1 does not? And vice versa. Could (or even should) everyone try to become capable of fitting in #2?


Each individual has a natural degree of talent for approaching a problem in versatile and parallel ways (and for solving it, which is an ability in itself). Communities often talk about skills, but rarely about how intelligence, the inborn ability measured as IQ, impacts how well a skill is improved or not.

Limiting the range to ±15 IQ points (one standard deviation) around the mean covers roughly two thirds of the population: people that you meet daily and probably at work (which may be more or less diverse, depending on your site).

How someone at the middle right of this range, compared to someone at the middle left, approaches a problem, daily tasks and decision making is very different, but it may look the same from the outside or in a question survey.

/Jonas


Why AI and IT automation should not be for IT, but instead for the business

Consider the ongoing trend of AI, robotics, deep learning, machine learning and various kinds of data analysis. The key enabler of this industry is data, information. An era of technologies set to be part of the future, already a multi-billion industry. In reference to this, one often hears "information is the new fuel". So let us stop there for a second.

What is fuel to you?

Think of it as a metaphor.

Then think of what fuel is and what function it has, from a generic perspective.

What can it cost not to challenge the current association with fuel? If the relation to, and understanding of, information and data is not very clear, how can one ensure sustainable development of the new technologies? I want to challenge this from a higher view, and explain why I want you to care about it.

For instance: a composition of atoms that you put into combustion. After combustion we have a new composition of atoms, something else. A catalytic process, or thereabouts. Irreversibly blended with other similar kinds of atoms, unable to be traced back to its source or original structure. The cost of fuel lies in producing it, not in using it; once the fuel is there, it is ready to serve. Does this fit the description of data in IT? If yes, might you be violating foundational architectural principles by neglecting traceability?

For completeness, I choose to start with IoT. The reason is that IoT matters as a newcomer and an enormous producer of data. But does IoT produce data? Well, yeah! At a cost? Maybe not obviously at first thought, but it is quite easy to immediately associate a cost with IoT. Hardware to produce is, for me, cost. The engineering of IoT hardware is in its infancy, not to mention the feasibility of the implementation phase. Of course I include the security aspect in that comment: IoT is meaningless and belongs in the kids' corner until it ensures security and acknowledgement of information receipt. Where is the business value here?

Machine learning and deep learning are definitely about data. But do they produce new data at a cost, like IoT? Not directly, one can say. They may be seen as the instrument used to transform fuel into power. As a technology. Still, building and maintaining them is directly connected to cost: the algorithms and the analysis of information or data produce new data through the creation of scenario figures, figures that come with human effort, training and evaluation. On top of that, one needs data quality recognition and classification. Every time you end up changing an algorithm, the old data might be useless and need to be recreated. Should I even mention the cost of power during computing? Where is the business value here?

Take persistence into account: DW and Big Data, plus a dozen "localized" technologies for structured data, in addition to the physical storage to persist the data. As with IoT, all might agree that this is engineering. Obviously not about fuel! But you might be tempted to associate information with the fuel persisted on it, like gasoline in a gas tank. But wait: this storage mechanism is there to keep the power until it is released, like a battery or a transmission/gearbox. As for the cost of this objective, think of the mandatory methods and styles needed to agree on transportation, data format, availability and quality selection. This breathes red-colored dollar signs. Costs just to produce availability and usefulness of data for decision making. Where is the business value here?

To this picture, add information. Now most of you might disagree, or say that of course there is business value behind all this. I bet you are right. Information is common to all these areas, but is it fuel in any of them? Or are they just supporting or consuming functions where information plays a part?

Is automation or AI about replacing humans with machines or services? Automating services without direct human actions. Inventing services in areas where humans can't or won't act today. Oh yeah, here we will find increased income, increased defense, or an increased service level. Reduced cost. Increased ROI. Happy business users, and a CFO that loves to invest and to see technology help create value. The key is automation. More about that soon. Now it's time for a mandatory parable.

Parable alert

More than a hundred years ago, automation could mean putting a cogwheel between a winch and a rotating arm, to move buckets of iron ore from a hill down to the ground. A person needed to rotate it. One may think it was a horse or two, but a horse must be managed by men, and there is a one-to-one relationship between a human work hour and the output. One day, someone attached a rotating axle from a completely different invention domain, the steam engine, to make the rope winch rotate. Suddenly iron ore could be transported 24 hours a day. It never got tired, always transported, regardless of delivery time. It now took one person five minutes once an hour to put coal into the steam engine's oven. All of a sudden, a little later, a good income could be doubled by installing another line of iron ore transport with the same purpose. Eventually a little faster, more reliable, and needing less coal to run. Invention, optimization, cost reduction.

Is this an obvious business case? Are there similarities to what we do with IT? Yeah, a little. And this is finally the point of the rest of the article. Robotics, machine learning and deep learning are all about IT. IT is the mechanics: strategy and methods to produce, use and re-use the information or data. The purpose? To convert it to power, together with fuel. The fuel itself is not in any of the earlier metaphors. In the parable above, the very transport (movement = service) of iron ore could be seen as the product of the whole mechanical composition. Compare this to IT delivering a web-based rental service. The fuel would still be the business decisions and ideas that drive the innovation and invention of technology and the usage of information, converted and combusted into power that pushes energy into the transmission and gearbox.

The questions I will let you take with you are (from an IT perspective): Are we the ones who should define and invent AI or automation? At what cost, or for what revenue increase, do we replace or incorporate automation so that it makes sense? That is a business question. IT now sits on an extremely powerful platform of technology. Maybe you cannot control the effects when you apply it in a grey zone where business decisions are not present. Think intentionally: IT is here to deliver an efficient technology strategy for business decisions, while the business is here to provide adequate services to customers, whatever their usage is. As far as I can see, the Master of Business Administration is there for business development and the interface to the customers. Right? It may be tempting for IT to take a share of the market to strengthen the "IT drives the business" perspective. The deeper technical knowledge of the tools can let IT advocate to the business which capabilities they should design services for.

It did not work well in the dotcom bubble around the year 2000. I assume that finance and IT around the world are more mature now and cannot repeat the dotcom bubble as it was, but I am certain that we can (by greed or by accident) repeat troublesome and time-consuming decision patterns and styles where IT is used to meet the business, possibly because we are new at reading today's volatile consumer market. Remember the intention of the technology strategy. I trust the business to make the decisions and requirements for automation and AI, but IT to provide the technology strategy. With that said, this is one interpretation of automation. One can say it is wrong, one can agree, one can choose both or neither. Reflect, analyse and comment! Thanks.

Bottom note: If you would like more people to read this, just press like or leave a comment. For LinkedIn that is enough to spread the word, in contrast to many other networks that require sharing.

/Jonas
@ImmerseIt


PS: An interesting analysis of emerging technologies, with respect to this article. The scope of the link reaches far beyond the year 2017, but this is indeed the year to start watching out for it. Also, as you understand, I still see AI as services to the business and end users, not a technology by itself.. =) http://cognitiveworld.com/article/emerging-technologies-watch-2017 DS.

Snapshot from a mentorship on optimizing how business and IT approach each other

It is a really great opportunity to mentor and help someone think something through, either with direct advice or as a sounding board that provides wider understanding, another view or new insights. I also find the self-recognition effect very amusing, as well as the challenge of transforming what I apply myself into writing, or redistributing it into an article like this. Okay, enough gaiety and back to the topic! This time the mentoring was about this (classic) topic:

The business is bad at describing to IT what they want. How do we make them understand what we need?

As you can see, I was immediately pushed into a classic round: Business vs IT

Anyone with a sense of humor understands the point of the model. However, there is usually a gap between IT and the business. Intentionally and by the book, one might think this is a junior question and that what follows is just a skill curve. But the fact is there, described in various job descriptions, in the pillars of the IT architect role, as well as those of a program manager or a project lead. Even whole programs are defined and driven to reduce this symptom. There is a lot of value in agile IT waiting for this gap to shrink. Every discussion like this is a little step closer to providing that value to users, employees and organisations.

The best part of it all: it is quite simple to address and mitigate, just by mindset and patience.

This time I was given a chance to emphasize engagement as the way forward, as opposed to contractualization or spending time claiming, restricting and explaining definitions. Of course a documented baseline can (and should) co-exist, such as "how the business should define a requirement". But as soon as that tool is used just to "throw things over the desk", there will be a "how IT should meet our requirements", and then we can start round 2 and everyone fails.

Instead, we consider engagement. The key is to build dynamic and soft touch points, reflecting the situation and scope: the method for delivering the requirement from the requester (business) to the receiving IT (developer), or the content to be discussed, documented as a test case or similar, based on the change management culture in the department(s).

The shared forum should be technology agnostic and driven by the need to complete both sides' perspectives of the requirement.

I like to ensure a generic approach wherever possible, to enable re-use of methods and thinking: not being bound to a product or vendor, and not requiring the other side to interact with specific tools beyond general office tools. The "engagement" itself may be an email chain, a phone conference, a meeting over a desk, a shared screen viewing a Kanban board, or all of them. The point is to take the together approach when deciding and agreeing on how recognition meets requirements.

The deliverable from the engagement into development will be very product specific. But the requirements or ideas need to be iterated through a templated assessment, a set of questions like the next model below. It is easier to ask "what should be defined here, in the acceptance criteria?" from a defined template.

If an item does not apply, just state that it does not apply and pass to the next one (but do state it, don't just remove the question). If an item cannot come to a conclusion, just document that and return to it next time (classic backlog).
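Since the assessment model itself is not reproduced here, the sketch below shows one way such a template could be represented; the questions and statuses are my own assumptions, not the template from the session.

```python
# A hypothetical requirement-assessment template, sketched as plain data.
# The questions and statuses are illustrative assumptions only.
ASSESSMENT_TEMPLATE = [
    "What is the acceptance criteria?",
    "Who is the requesting business owner?",
    "Which systems or components are touched?",
    "How is the change verified (test case, demo, report)?",
]


def assess(answers: dict) -> list[dict]:
    """Walk the template: unanswered items go back to the backlog,
    non-applicable items are kept and marked, never deleted."""
    result = []
    for question in ASSESSMENT_TEMPLATE:
        answer = answers.get(question)
        if answer == "N/A":
            status = "does not apply"  # mention it, don't remove it
        elif answer is None:
            status = "backlog"         # no conclusion yet, revisit next time
        else:
            status = "answered"
        result.append({"question": question, "answer": answer, "status": status})
    return result


for item in assess({"What is the acceptance criteria?": "Report renders in < 2 s"}):
    print(item["status"], "-", item["question"])
```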

This much, approximately, was what we were able to conclude during our 30-minute session.

Finally transformed into the point of increased value

Besides this, the next step is open: connecting continuous integration and continuous deployment to the requirements, with automated reports for test acceptance approval by the business. Let that be a later story. Or your story.. =).

/Jonas
@ImmerseIt

Monolith First – When MicroServices make sense

In the current microservices hype, I want to emphasize the point of being extra careful about where to implement a microservice architecture, so it does not happen "by all means" just because many talk about it. Read this article, which shares good insights.

bliki: MonolithFirst

As I hear stories about teams using a microservices architecture, I've noticed a common pattern. Almost all the successful microservice stories have started with a monolith that got too big and was broken up. Almost all the cases where I've heard of a system that was built as a microservice system from scratch, it has ended up in serious trouble.

Where do they go, once you let them go through your port (number)?

As you know, UDP is stateless (or rather, connectionless). So there is no guaranteed delivery to the destination (nor any promise about what is returned to you).

UDP has grown rapidly in popularity in recent years. Commonly known areas are online gaming, media streaming and online collaboration, and even some data-intensive business applications rely on UDP. It is also worth mentioning that services like Wi-Fi love UDP for distributing discovery information to the network.

However, with UDP come critical levels of security risk. Besides the fact that you, by protocol definition, do not know for sure whether the packets are delivered, you also send information that can be redirected, transformed or dropped without further notice. When packets return, you think you know what came back, but you do not know for sure.

So the rhetorical question: do you care, or know in detail, what data you provide in UDP packets? As long as the data is kept in an encrypted transfer it is fine, provided the encryption is secure.
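The fire-and-forget behavior is easy to see in code. A minimal sketch using Python's standard socket module, with a placeholder address and payload:

```python
# Fire-and-forget UDP: sendto() returns as soon as the datagram leaves
# the local stack; nothing tells us whether it ever arrived.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # connectionless
sock.sendto(b"hello", ("127.0.0.1", 9999))  # placeholder address and payload

# Waiting for a reply is our own idea, not the protocol's promise:
sock.settimeout(1.0)
try:
    data, addr = sock.recvfrom(4096)
    print(f"something came back from {addr}: {data!r}")  # but from whom, really?
except socket.timeout:
    print("no reply - delivered or dropped, UDP will never tell us")
finally:
    sock.close()
```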

Happy weekend 🙂

 

This posting is sent as a reminder not to use exceptions for flow control!

Why? Because the code will be hard to follow: one path for the exception and one without.

Why? Because the exception may not describe what really happened.

Why? Because the code might be rewritten without knowledge of what happens in the methods that use the exception for control flow.

Why? Because you would probably not reverse-trace all instances of your exception.

Why? Because you are lazy.

Why? Because you know that you won't reimplement the methods.

Why? Because you will find it is not within the scope of your job.

Why? Because the boss will ask what you are doing instead of the described change task.

And by the way, maybe the automated tests or other mechanisms do trace them, because they might rely on the exception flow.

And then.. then you have to explain to the boss why the tests failed while you worked on something else.

– Just do the exception handling properly! Normalize the collections based on the agreed data structure (or on the lack of one) during the increment. Create specialist classes for anomalies and exceptions, as if they were new programs handling the code.
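A minimal before/after sketch of the point, with invented function names: the "bad" version abuses an exception as an if-statement, while the "good" one treats the absent case as an ordinary path.

```python
# Bad: the exception IS the control flow. Readers (and tests) must know
# that KeyError here means "user not registered", not a real failure.
def discount_bad(prices: dict, user: str) -> float:
    try:
        return prices[user]
    except KeyError:
        return 0.0


# Better: the normal case and the absent case are both ordinary paths,
# and exceptions are left to signal genuinely exceptional situations.
def discount_good(prices: dict, user: str) -> float:
    return prices.get(user, 0.0)


prices = {"alice": 0.15}
assert discount_bad(prices, "bob") == discount_good(prices, "bob") == 0.0
```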

Data science evolution 1960 to 2040

Isn't it amusing to see how paradigms have passed in the computer era? Let me share some weekend amateur drawings.

1) 1960-1980. Monstrous servers and ultra-dumb clients.

2) 1980-2000. Monstrous clients and puny servers connecting to outputs. (VB6 and Access databases loved to be close to the client.)

3) 2000-2020. Enormous servers and enormous JavaScript-heavy clients exchange enormous amounts of data everywhere. What's the paradigm here?

4) 2020-2040 (speculative). Connected clients are units, each the owner of its own data, stored locally. Data transfer between units and a kind of inter-backbone consists of just descriptions, models, metadata, statuses and point-to-point connectivity agreements for subscribers. The inter-backbone is just there to transport data, broker services and agreements, and do the physical transfer between networks. More like IP datagrams in the lower layers.

Have a nice Friday and weekend! And remember to do your homework.

Is this how Russians, through IT, promote Trump to non-US citizens?

Russian hackers appear to think outside the box when providing and distributing messages to the world-wide public.

Webpage statistics

It is quite worrying to read about modern-day attempts to fabricate misleading information and news. It is also worrying to see organizations or hackers trying to use weak spots in the TCP and UDP protocols to provide messages to the world, not only tricking the resources that consume them but also abusing their design. The screenshot above is taken from two different web pages, and we can see completely abnormal activities here. In this particular case there is a language header (sent with the HTTP traffic that runs over TCP) used by client and server to determine, for instance, web language capabilities and preferences. Short version. As a side note to the article, I have no facts proving Russia as the source, just that they appear to be. An interesting situation when talking about false information. So let's continue from that standpoint.

This header is fully possible to amend and do stuff with, with just normal development knowledge. It is not even considered "hacking" to amend those fields to whatever you like. What we see here is a new kind of grip in an information war. Hey, these headers are there to be used for customization, to make web pages and clients fit the style that the clients want, and for the server to load and balance the correct resources.
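To show how little development knowledge it takes, here is a sketch using Python's standard library; the URL is a placeholder, and Accept-Language is my assumption about which language header the screenshots show.

```python
# Request headers are entirely client-controlled: any string can be sent
# as the Accept-Language value, and naive statistics tools will log it.
from urllib.request import Request, urlopen

req = Request(
    "http://example.com/",  # placeholder URL
    headers={
        # Normally something like "en-US,en;q=0.9"; nothing in the
        # protocol stops a client from putting arbitrary text here.
        "Accept-Language": "any text a client wants to broadcast",
        "User-Agent": "plain-python-client/1.0",
    },
)
with urlopen(req) as resp:
    print(resp.status)  # the server saw, and may have logged, our header
```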

But unfortunately it can also be used as an alternative way to spread propaganda or other kinds of information. It has, in a fairly straightforward way, long been a useful channel (mostly for hackers) to spread information "under the radar". Over the years, several weaknesses have been exploited in servers and clients by malformatting those headers and how they look, and also fixed with hundreds of patches in all kinds of layers and applications. The issues exist and are fully possible because computer systems are traditionally built on the idea that "keeping good sense is win-win", so owners and developers only develop up to application stability. Security has often been the black sheep, associated with unnecessarily high cost because of: "Do we need it for it to work? Does it work anyway?" No, yes..

In recent years, we have come to see that the cost reduction did not disappear. It was just moved forward in time and classified as a "security threat" instead of being included in development sprints from the beginning. If anything good came out of it, it is that we now have new IT professional titles such as "security architect", "security specialist" and so on. They now have a job for a lifetime.

To be honest, more annoying than worrying is that the world's most used communication method depends, at the transport level, on just two transfer protocols (I attach a BUT to this comment, for a later posting), both of them relying so heavily on the sending and receiving applications for their security. I want to mention the link and hardware layers, but that chapter would dig us into a black pool of mud open for exploits and for spreading disinformation.

What do you think we should do in the near future? SSL and two-way encryption are just ways to hide information from the wires and waves. If they become too efficient, we will start to worry about other war-related challenges. But the core of the issue is closest to a solution: the information that can be sent is constructed at the application level, developed by programmers, and it is received by applications developed by programmers. Programmers can be hired, or have their own agendas or other purposes that do not follow the purpose of their work. Employees can knowingly leak information for foreign purposes. Software security is hardly of help here. Applications can be patched: rely less on the data provided in the headers, and have better mechanisms for how information is transferred, to reduce the risk of being hacked. In these days of machine learning, deep learning and AI algorithms, we also need to take much more care about how much descriptive metadata applications provide. Also, how much descriptive metadata key positions at the infrastructure, application owner and administrator levels can leak, accidentally or deliberately.

I see a year 2017 where most developers will stand in front of questions like these:

  • How do you secure your code?
  • What is security to you?
  • What does the word "responsibility" mean to you when you produce safe code?
    • Not in terms of memory leaks or machine safety: it means information safety.
  • Has a system you worked on been hacked?
  • Have you cleaned up or traced activities from a hack attempt?

I am also almost 100% sure that we will soon see insurance firms add services for costs related to security threats, for private customers, companies, or whatever kind of customer.

The florist steals your data

I planned to steal information, based on the possibilities of my current flower care program and a personal interest in earning money and giving the rich world the side effects of its greed. Yeah, planting flowers. Not really my home genre, but why would that matter? You would be surprised how little people care about the flowers in the office. Let's concentrate on two very interesting customers of mine. Five visits a year to a traditional big bank, let's call it Sach. Three visits a year to an up-and-coming popular Internet-based bank, let's call it Prls. How hard can it be? Luckily nobody has noticed that I work for both of them, even though the agreement is in its third year now.

This third year was about to be special. After this year's total of eight visits, I will have added a dozen sensors in different areas of the offices I care for. Together they collect approximately 1 GB of data per 24 hours. Already in the middle of the second year of the agreement, I proposed a replanting of flowers for the next year. More specifically, at the 2nd visit to the large bank Sach and the 1st visit to the Internet bank Prls.

In parallel, I had already run a personal study of the planting. I had a year to make sure the flowers would survive. When the time for replanting comes at the customer, I have full responsibility for everything, from bringing flowers to and from the office, including the potting soil to use. Nobody watches me, and nobody could care less how I do my work. So besides planting, I also added two water-resistant battery bays and a wireless hotspot in the pots: specially manufactured hardware and software for this purpose, broadcasting on a hidden network in another range. From the batteries I pulled thin wires inside the thicker stems of the bigger flowers, with pluggable cords inserted at one of the lower branches. The smaller flowers can't use the wires, but they can collect sound and movement.

On the next visit after the replanting, I do not only nurture and prune the flowers. I also verify connectivity and read the data collected so far into my smartphone, during the time I spend on each plant. I also punch a micro camera into the spare plug below the bottom branch: just 3 millimeters in size, but with quality good enough for face and basic image recognition. Even smaller sound recorders go in all the flowers, and not least, sensors that register possible movements. On the next visit I will be able to transfer approximately 100 GB of data per flower, stored up since my last visit. A successful transfer takes about 15 minutes and automatically cleans the memory card once done. Then I walk on to the next big flower. The smaller flowers collect sound, but also information about other networks in the office.

After each visit, I auction off the data, unstructured, on the dark web. The buyers get exclusive rights (well, they can of course steal it if they want, but we trust each other) to redistribute it, as long as they return to me with the useful information they extract. The pay is mostly Bitcoin, and unfortunately I know it is almost 100% black market and worse. In turn I use the money to pay intelligent developers to create nice techniques and software algorithms for me. Why would I have a moral panic about providing information from organizations that more or less steal money from tax-paying people instead of showing a true interest in reinvesting it in a better world? As soon as more legal money is reinvested in the people and a better world, I will stop stealing and distributing the information. The last time something really useful happened was when some gigabytes of mail server information exposed some tax-free accounts and related rubbish cooperation.

What do they do with the data? I can imagine. I myself run automated pattern analysis, so I can identify, for instance, faces and visiting patterns. I know, for instance, how many cups of coffee my customers have, or how long they spend in a particular toilet. Funnily enough, the algorithm finds that the share of visiting time is incredibly much higher from about 15 o'clock until the office closes in the afternoon than during the rest of the working day. The algorithms also set placeholders on media where more than one voice is involved but only one is talking at a time, appearing as a chat. Again, the coffee machine brings many discussions and facts.

Looking back

It began back in 2013 when I was unemployed and met a guy at a bar. I was alone at a round table with space for four, in a round-shaped red leather sofa. In front of me, apart from some other tables and the bar, I had a book titled "10 paradoxes in human behaviors" on top of another book with the very basic title "Computer Science". At that very moment I also had a boring Pang IPA beer on my table. For me, nothing about this was unusual, except that this time I met this guy.

He asked to sit with me for a while, having some questions about my choice of books. Later he asked me about my daily life, and then told me a little about his. I lied a lot to him and described my work as a buyer at a bookstore nearby. I read a lot of books and have an idea of what people need to hear for me to be trustworthy. This way we started to get to know each other, and he offered me a job as a florist, working at offices to take care of their flowers.
__
A made-up story by Dundee

How should the IT industry staff and finance the dreamlike large projects?

Let us start with this parable: assume that the construction sector gets a project, e.g. Förbifart Stockholm, the Öresund Bridge, Nya Karolinska, Inlandsbanan, Citypendeln and so on. Really large projects, the kind that take a few years to complete. A certain amount of labor is mobilized for a period of time.

Now let's say we mobilize the same scale of man-hours and project to build something in IT. What would we build then? You may have more ideas than I do, but for me the thought is a little dizzying. Perhaps we could reroute the heat that enormous data centers generate to warm homes and maybe whole land areas? An already well-discussed idea, and one already in vogue in several places. I think one should be allowed to fantasize and exaggerate; that is, after all, what happens now and then in other construction projects. People have surely also laughed at the thought of building snow tunnels in the Arabian desert, buildings like the Burj, or why not a tunnel between Dover and Calais. A construction like the Öresund Bridge is not built very often. Nor the High Coast Bridge. Nor Turning Torso. Tellus Tower is not built very often either (hardly at all, though).

But when, and how often, is a high-prestige IT project built? Where are the visions here? Why do the IT projects that get invented seem so difficult and complicated, with diffuse and high risks regarding deliverability and money? Someone has perhaps already touched on the thought that it is impossible to get hold of the resources to carry out enormous IT projects. Perhaps someone has already started laughing at the costs it would entail. Building something really big in IT, how would it pay off? Perhaps the reality-aware and down-to-earth also see a growing threat here: that IT may be falling behind. Is it impossible to finance, or even carry out, a large project in this country? Could a kind of IT defense be mobilized, supposing the ongoing traces of information war intensify?

Build out: make way for everyday trades in IT

It is my firm conviction that IT is mature enough to take in a larger share of everyday trades and moderately educated and knowledgeable employees. People with ordinary education, or none at all (in the field), who after an introduction and training period can take on a task, perform it with flying colors and receive a normal salary. Over and over again. A new workshop floor, an industrial floor, or large forces trained for a purpose. The IT of today is a somewhat overrated industry when it comes to competence. It does not work to have only specialists and generalists. Educating yourself in IT today is about knowing everything as quickly as possible, and with the prevailing culture you are expected to advance quickly and demand salaries for the smallest thing you have accomplished. But what is needed is automation, quality testing, spot checks and, not least, repetitive volume work.

Just as mining managed to arrange this already in the 17th and 18th centuries? Let me draw a parallel with LM Ericsson. Let me also speak in the past tense, since it is the past for me; of course this is also living present in any manufacturing industry. Testers, troubleshooters and assemblers could be brought in practically off the street. Former assistant nurses and nurses were also brought in on a large scale (since that became an understaffed and poorly paid profession for a time, a number of them were lured into industry). After a short introduction they started performing the work with flying colors. Troubleshooting, repairs and the development of new products, on the other hand, required that little extra: engineers and designers, but also experienced and skilled workers "on the floor". Internal training and a few years of experience were enough to stand out with the competence for this, for instance for programming. The engineers were (are!) there to adjust and fix things when something fell outside the automation.

IT as a yuppie trend: a new kind of golf

It is certainly exciting to see that IT professions in general have risen high in value. Careerists are drawn here, and there are traces of a "new kind of golf". People with ever higher profiles seek their way here, and the IT companies also look for people with higher profiles. Salaries, bonuses and benefits arrive that begin to resemble levels otherwise associated with prestige. Not infrequently one also hears that IT places itself visibly in the neighborhood of the otherwise "glamorous" finance sector. I would also like to touch on the salary, benefit and bonus rally that has been going on for many years, between and within those who have established themselves in the IT sector, a rally that keeps a higher profile than the inflow of competence. Is it not time for IT to start recruiting the heavier cards from well-oiled manufacturing industries? Welcome here, Volvo, ABB, Skanska and so on! Out with the service-sector aces! IT needs everyday trades now, where you can quickly recruit and staff in diversity!

Start transforming!

  • Take Academic Work's pioneering spirit of month-long courses seriously: yes, it is possible to make IT consultants in a few weeks. But make it even a little more efficient: even narrower resources, and even more of them.
  • Bring in profiles from existing and recognized manufacturing industries and workshops, well accustomed to mobilizing hundreds and thousands of employees. Why not take a look at conscription.
  • Workplaces and their geographic placement are no longer central. Development or testing does not have to happen where the customer sits. It has been said so many times, but think of Ericsson Cables: did they build the cables where they would later be laid, or did they ship them by truck?
  • Stop staffing the projects with resources and personnel so senior that they could almost run the whole project by themselves. Let the job description for these strong talents be about taking responsibility for leading and transforming their area to suit everyday-trade staffing.
  • Throw the business leaders out of the IT projects. They can pursue their business interest, but not run the technical project. Place people who know the reality and can put their foot down in board and management meetings about what the various ideas actually require.
  • Let us get away from loosely composed project groups where 50% of the competences overlap: groupings where perhaps a third feel a strong need to advance and think about things other than what they are actually supposed to do in the project.

With that, I have allowed myself to fantasize a little and push the limits. The fixed point in those swings is that IT has the potential to expand with a new "baseline" for the competence that will be needed to quickly construct larger IT projects at a higher tempo. A new kind of "everyone can become a programmer", but where the content of the tasks is far narrower and more automated than we imagine today. Like an automated production line, but possibly without a "pee-break button".

/Jonas
@ImmerseIt
