Speakers

Meet Agile & Automation Days speakers.

KEYNOTE: Karen N. Johnson Jamf

BIO: Karen is a longtime active contributor to the software testing community. She frequently speaks at conferences both in the US and internationally. Karen is a contributing author to the book Beautiful Testing, published by O’Reilly. She is the co-founder of WREST, the Workshop for Regulated Software Testing. She has published numerous articles, and she blogs and tweets about her experiences. Find her on Twitter as @karennjohnson (note the two n’s) and on her website: http://karennicolejohnson.com/. Karen is Director of Jamf Now Development & Delivery at Jamf.

KEYNOTE: Stephen Janaway Bloom & Wild

BIO: Stephen is VP of Engineering for Bloom & Wild, the UK’s most loved online florist.

Over the last 16 years he has worked in coaching, training and leadership positions in companies such as Nokia, Ericsson, Motorola and the YOOX NET-A-PORTER GROUP, as well as advising a number of mobile and e-commerce companies on development, testing and delivery strategies. He has written and presented many times about software development, testing and delivery, and is co-curator of the Testing In The Pub podcast.

Keynote: Check This – Test Automation, A Development Manager’s View

Test automation belongs to the testers, and as testers we care about quality more than the rest of the development team does, right? It’s easy to think this. I know; I’ve been there, as a Tester and a Test Manager.

But three years ago I made a change: I started managing the whole development team. I began to see how the whole team should use test automation, and how much more efficient we could become when we all took responsibility for quality.

This talk is about that journey.

  • How automation is a vital part of a good test strategy, but it’s not just a ‘testers’ thing’.
  • Why automation won’t rescue you from regression testing hell.
  • Ways in which development managers can ensure that the team own automation.
  • Why a good strategy smooths the flow of deliveries in a team.
  • How you should treat your automation code like production code.
  • Automation is just another way to ensure that a team can own quality. Remember that your one job as a tester is to help your team own quality.

Anand Bagmar Essence of Testing

BIO: Anand is a hands-on and result-oriented Software Quality Evangelist with 20+ years in the IT field, of which 18+ years are in software testing. He is passionate about shipping a quality product, and specializes in building automated testing tools, infrastructure and frameworks.

Anand writes testing related blogs and has built open-source tools related to Software Testing – WAAT (Web Analytics Automation Testing Framework), TaaS (for automating the integration testing in disparate systems) and TTA (Test Trend Analyzer).

You can follow him on Twitter @BagmarAnand, connect with him on LinkedIn at https://in.linkedin.com/in/anandbagmar and read his blog at https://essenceoftesting.blogspot.com.

Presentation: Measuring Consumer Quality – The Missing Feedback Loop
session level: intermediate

How to build a good-quality product is not a new topic. Proper use of methodologies, processes, practices and collaboration techniques can yield amazing results for the team, the organisation, and the end users of your product.

While there is a lot of emphasis on the processes and practices side, one aspect is still spoken about only “loosely”: the feedback loop from your end users back into better decisions.

So, what is this feedback loop? Is it a myth? How do you measure it? Is there a “magic” formula for making sense of the data you receive? How do you add value to your product using this data?

In this interactive session, we will use a case study of a B2C entertainment-domain product (with millions of consumers) as an example to understand and answer the following questions:

  • The importance of knowing your Consumers
  • How do you know your product is working well?
  • How do you know your Consumers are engaged with your product?
  • Can you draw inferences and patterns from the data to reach a point where you can make predictions about Consumer behaviour before making any code change?

Take-aways:

Attendees will gain a deeper understanding and appreciation of the following:

  • What Consumer Quality is and how it helps shape your business
  • Ways to measure Consumer Quality (one small illustration follows below)
  • Why understanding Consumer Engagement is vital to the success of your product
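
As one concrete, generic example of such a measure (my own illustration, not the speaker’s formula), a “stickiness” ratio compares daily and monthly active users to gauge how engaged consumers are:

```java
import java.time.LocalDate;
import java.util.List;

// Toy engagement metric: stickiness = daily active users / monthly active users.
// The event data, user ids and the chosen metric are illustrative assumptions.
public class StickinessMetric {

    record UsageEvent(String userId, LocalDate day) {}

    public static void main(String[] args) {
        LocalDate today = LocalDate.of(2018, 10, 15);
        List<UsageEvent> events = List.of(
                new UsageEvent("u1", today), new UsageEvent("u2", today),
                new UsageEvent("u1", today.minusDays(3)), new UsageEvent("u3", today.minusDays(10)),
                new UsageEvent("u4", today.minusDays(25)));

        long dau = events.stream()
                .filter(e -> e.day().equals(today))
                .map(UsageEvent::userId).distinct().count();
        long mau = events.stream()
                .filter(e -> !e.day().isBefore(today.minusDays(29)))
                .map(UsageEvent::userId).distinct().count();

        // Prints DAU=2, MAU=4, stickiness=50% for the sample data above.
        System.out.printf("DAU=%d, MAU=%d, stickiness=%.0f%%%n", dau, mau, 100.0 * dau / mau);
    }
}
```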

Workshop: Analytics Rebooted – A Workshop
session level: beginner

I have come across some extreme examples of businesses and organizations that have all their eggs in one basket when it comes to: understanding their Consumers (engagement, usage, patterns, etc.), understanding the usage of product features, and doing all revenue-related book-keeping.

All of this is done purely through Analytics! Hence, the saying “the business runs on Analytics – it may be OK for some product or user features not to work correctly, but Analytics should always work” is not a myth!

What this means is that Analytics is more important now than ever before.

In this workshop, we will not assume anything. We will discuss and learn the following by example and practice:

  • How does Analytics work (for Web & Mobile)?
  • Test Analytics manually in different ways
  • Test Analytics via the final reports
  • Why some Automation strategies will work, and some WILL NOT WORK (based on my experience)!
  • We will see a demo of the Automation running for the same.
  • Time permitting, we will set up and run some Automation scripts on your machine to validate the same (see the sketch below)
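
To give a flavour of what checking Analytics can look like at the lowest level, here is a small, self-contained sketch of my own: it inspects a hypothetical, Google-Analytics-style beacon URL (as would be captured via a proxy or browser tooling) and asserts that the agreed tracking parameters are present. The URL, parameter names and capture mechanism are assumptions; this is not WAAT or the workshop material itself.

```java
import java.net.URI;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal check of a captured analytics beacon: the request a page fires for an
// "add to cart" event should carry the agreed parameters. Placeholder values only.
public class AnalyticsBeaconCheck {

    static Map<String, String> queryParams(String url) {
        return Arrays.stream(URI.create(url).getRawQuery().split("&"))
                .map(pair -> pair.split("=", 2))
                .collect(Collectors.toMap(
                        p -> URLDecoder.decode(p[0], StandardCharsets.UTF_8),
                        p -> URLDecoder.decode(p.length > 1 ? p[1] : "", StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) {
        // In a real test this URL would be captured from the browser's network traffic.
        String capturedBeacon = "https://analytics.example.com/collect"
                + "?tid=UA-12345-1&t=event&ec=cart&ea=add_to_cart&ev=1";

        Map<String, String> params = queryParams(capturedBeacon);
        for (String required : List.of("tid", "t", "ec", "ea")) {
            if (!params.containsKey(required)) {
                throw new AssertionError("Missing analytics parameter: " + required);
            }
        }
        if (!"add_to_cart".equals(params.get("ea"))) {
            throw new AssertionError("Wrong event action: " + params.get("ea"));
        }
        System.out.println("Beacon carries the expected tracking parameters.");
    }
}
```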

 

Łukasz Rosłonek Allegro

BIO: Łukasz is a dedicated Test Engineer who advises teams on implementing continuous delivery solutions with the use of test automation and TestOps practices. He specialises in testing distributed architectures and in connecting an agile mindset with top-notch technology. A big enthusiast of open-source software and the continuous testing approach. Author of the testdetective.com blog and a frequent speaker at various IT events. After hours, a guitar nerd and a book fan.

Presentation: Test automation vs distributed architecture
session level: advanced

While distributed system architecture is becoming the new standard in web application engineering, functional test automation is still struggling to catch up. Knowing the cost of frontend automation, and with the majority of domain logic exposed through REST APIs, there is a strong need for robust and reliable end-to-end automated testing on the backend side.
There are many challenges for test automation in distributed systems: asynchronous calls, messaging protocols and domain modelling, just to name a few.
In this talk we will go step by step through designing and implementing an end-to-end testing framework for microservices. We’ll dive into isolating environments, test strategy, code and automation. All of this is based on lessons learnt from building an automated test solution for a real-world, complex distributed architecture.
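
To make one of those challenges concrete: asynchronous calls mean an end-to-end test usually cannot assert right after acting; it has to poll the read side until the expected state appears or a timeout expires. A minimal, generic sketch (hypothetical endpoints, plain JDK HTTP client, not the speaker’s framework):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;

// Minimal end-to-end check against a hypothetical asynchronous order service.
// The POST is only accepted; the test then polls until the backend has finished
// processing or a timeout is reached.
public class AsyncOrderEndToEndTest {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();
    private static final String BASE_URL = "http://localhost:8080"; // assumed test environment

    public static void main(String[] args) throws Exception {
        // 1. Trigger the asynchronous operation.
        HttpRequest create = HttpRequest.newBuilder(URI.create(BASE_URL + "/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"book\",\"quantity\":1}"))
                .build();
        HttpResponse<String> created = CLIENT.send(create, HttpResponse.BodyHandlers.ofString());
        if (created.statusCode() != 202) {
            throw new AssertionError("Expected 202 Accepted, got " + created.statusCode());
        }

        // 2. Poll the read side until the expected state shows up (assumes a relative Location header).
        String orderPath = created.headers().firstValue("Location").orElseThrow();
        Instant deadline = Instant.now().plus(Duration.ofSeconds(10));
        while (true) {
            HttpRequest get = HttpRequest.newBuilder(URI.create(BASE_URL + orderPath)).GET().build();
            HttpResponse<String> status = CLIENT.send(get, HttpResponse.BodyHandlers.ofString());
            if (status.body().contains("\"status\":\"COMPLETED\"")) {
                break; // eventually consistent state reached
            }
            if (Instant.now().isAfter(deadline)) {
                throw new AssertionError("Order was not completed within 10s: " + status.body());
            }
            Thread.sleep(500);
        }
    }
}
```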

Take-aways:

  • A robust test automation strategy for a microservices architecture
  • Technical challenges of testing distributed architectures
  • A domain modelling approach to composing automated tests
  • Tools and frameworks for implementing REST API test automation
  • Handling asynchronous communication and messaging architecture in test automation
  • Building a microservice CD and engineering pipeline around test automation
  • Lessons learned from implementing complex REST API test automation solutions

Noemi Ferrera Netease

BIO: Noemi is a software engineer passionate about technology and testing. She has been in and out of testing roles and has always striven for quality, automation and the creation of tooling to ease the entire development process. She has worked in multinational companies (such as IBM, Microsoft and Dell) and also in a startup in Ireland. She has recently relocated to China and is working for Netease Games.

Presentation: Using Machine Learning for Test Case Decision
session level: beginner

Running too many tests could be expensive in the agile world. Selecting the right test cases to run has always been a tough task.

In this talk, I intend to explain how machine learning could help us decide which test cases to run from a large test suite. I start by explaining the problem itself and the variables we can take into account in order to select the most representative tests.

We will explore examples of data relationships that would not be obvious to a human but that a machine could detect.

Then I explain a bit about machine learning and how it could be applied to this problem. We will look into how reliable this solution would be and what we can do to implement it, as well as alternatives to this solution.

Finally, I show an example and open the floor for questions.
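
To make the input to such a decision concrete, here is a hand-rolled heuristic of my own (illustrative only, not the speaker’s model): score each test from simple signals such as its recent failure rate and its overlap with the changed files, then run only the top of the ranking. A learned model would essentially replace these hard-coded weights with ones inferred from historical data.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Set;

// Baseline heuristic for test selection: score each test from simple signals
// (past failure rate, overlap with changed files) and run the highest-scoring ones.
public class TestSelector {

    record TestCase(String name, double recentFailureRate, Set<String> coveredFiles) {}

    static double score(TestCase test, Set<String> changedFiles) {
        long overlap = test.coveredFiles().stream().filter(changedFiles::contains).count();
        return 0.7 * overlap + 0.3 * test.recentFailureRate();   // arbitrary illustrative weights
    }

    public static void main(String[] args) {
        Set<String> changedFiles = Set.of("Cart.java", "Checkout.java");
        List<TestCase> suite = List.of(
                new TestCase("checkoutFlow", 0.10, Set.of("Checkout.java", "Cart.java")),
                new TestCase("searchResults", 0.02, Set.of("Search.java")),
                new TestCase("cartTotals", 0.30, Set.of("Cart.java")));

        suite.stream()
                .sorted(Comparator.comparingDouble((TestCase t) -> score(t, changedFiles)).reversed())
                .limit(2)                                        // budget: run only the top tests
                .forEach(t -> System.out.println("run " + t.name()));
    }
}
```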

Take-aways:

This talk would provide new ways of automating tasks and would inspire the audience to come up with some of these or use them for their projects.
It would teach the audience some basic knowledge about AI, rule-based systems and machine learning.
It would also open discussions about the future of testing and the ethics of relying on computers to do human tasks.

Tomasz Kropiewnicki Royal Bank of Scotland

BIO: I’m a passionate Agile Coach and Delivery Consultant on a mission of discovery. I spend my days creating high-performance teams and helping organisations reach excellence in software delivery and portfolio management.
Recently I’ve been busy helping a global retail bank manage the portfolio of their security services.

I took my first steps in iterative value delivery in 2006 with Feature Driven Development; although I quickly moved to Scrum, I never looked back. I hold a deep belief that the purpose of a high-performing agile team is to deliver value through sensing and responding – practices, whatever they might be, are there to help.

Presentation: It’s not done until it’s gone!
session level: intermediate

Time and time again I see the team or portfolio boards with a magical “Done” column that everyone strives to put their cards in.

Have you considered that when you put those stories, products and services in that column, they are far from “done”? In fact, they have only just started their proper lifecycle!

The fallacy of being “done” seems to be a latent consequence of the lingering Project mindset that we have been exposed to for years. It’s time to take the next step and start considering the full-lifecycle profits of our hard work.

In this session, I will share my experience and the practical tools I use to help organisations adopt better portfolio management techniques, where delivery is just one part of a broader product/service lifecycle.

Following the Continuous Digital and #NoProjects principles, I’ve introduced a series of tools and techniques that help organisations not only understand the full lifecycle but also gain the insights needed to support the decision-making process.

In essence, we are not done when our work hits real users in production; I would like to make the case that it is only finished when it’s gone from any active use.

Take-aways:

I’m hoping to expand your perspective on product/service vs project thinking and Agile Portfolio Management.

Dmitry Lyubarskiy Facebook

BIO: Software Engineer at Facebook UK, working on testing infrastructure for the past two years, focusing on running tests reliably at scale. Passionate about developer experience, mathematics, algorithms, and scaling systems.

Presentation: Scaling Testing @ Facebook
session level: intermediate

At Facebook, thousands of developers commit multiple changes every day. At that scale, we use a mono-repository approach with very lightweight “feature branches”: developers typically commit changes within several hours of opening a pull request. Committed changes are normally pushed to production automatically a few hours after the commit.

Given the number of people using Facebook, this makes verifying and testing changes before committing paramount. Our data show that moving the testing signal upstream increases the bug fix rate and contributes positively to developer efficiency.
At the same time, we have hundreds of thousands of tests, including many resource-heavy ones. Running a resource-heavy test can require access to a browser, writing data to test DBs, etc. This makes the brute-force approach of running all of them on each pull request impossible.
This talk is dedicated to the measures we’ve been taking to move the test signal to pre-commit while using a feasible amount of resources.

  • Tackling the resource problem by automatically grouping pull requests together.
  • Selecting the right tests to run on pull requests.
  • Analysing test flakiness and preventing false positives.

Take-aways: 

  • It is a good idea to move everything to pre-commit, i.e., test while the developer is still thinking about the changes.
  • Combining pull requests together is a trade-off between time to signal and resources spent.
  • How combining pull requests is related to the Poisoned Wine Problem (see the sketch after this list).
  • Optimistic and pessimistic strategies for combining pull requests, and how they relate to the cost of blame.
  • Different approaches to fighting flakiness
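
One simple way that idea plays out (an illustrative sketch of my own, not Facebook’s actual infrastructure): once a batched group of changes turns a test red, the culprit can be found with roughly log2(n) extra test runs by bisecting the batch, rather than retesting every change individually.

```java
import java.util.List;
import java.util.function.Predicate;

// Finding the culprit inside a batch of changes by bisection. Hypothetical
// Change type and test runner, assuming exactly one culprit in the batch.
public class CulpritFinder {

    record Change(String id) {}

    /** Returns the single change that makes the test fail. */
    static Change findCulprit(List<Change> batch, Predicate<List<Change>> testFails) {
        if (batch.size() == 1) {
            return batch.get(0);
        }
        List<Change> firstHalf = batch.subList(0, batch.size() / 2);
        List<Change> secondHalf = batch.subList(batch.size() / 2, batch.size());
        // One test run tells us which half hides the culprit.
        return testFails.test(firstHalf)
                ? findCulprit(firstHalf, testFails)
                : findCulprit(secondHalf, testFails);
    }

    public static void main(String[] args) {
        List<Change> batch = List.of(new Change("D1"), new Change("D2"),
                new Change("D3"), new Change("D4"), new Change("D5"));
        // Pretend D4 is the change that breaks the test.
        Predicate<List<Change>> testFails = changes ->
                changes.stream().anyMatch(c -> c.id().equals("D4"));
        System.out.println("Culprit: " + findCulprit(batch, testFails).id()); // Culprit: D4
        // Fewer test runs per batch is the upside of grouping; the downside is the
        // extra time-to-signal when something breaks - exactly the trade-off above.
    }
}
```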

 

Mariusz Gil Source Ministry

BIO: Mariusz Gil is an architect and CTO focused on highly complex, high-performance web applications. Trainer, consultant and conference speaker. He has worked for several companies on PHP projects for millions of active users, from the biggest social network and instant-messaging software in Poland to a multi-billion-page-view content personalization and discovery platform. Mariusz is also a member of the 4Developers and PHPCon programme committees and one of the core members behind PHPers, open meetups for PHP developers in many cities in Poland. A big-data enthusiast and data-scientist wannabe. After hours, a biker, rock guitarist and landscape photographer.

Presentation: Discovering the unknown with Event Storming
session level: intermediate

Event Storming is a lightweight discovery and collaborative learning technique for exploring complex domains and problems. Using business Events, the first-class citizens of modern software development, in a very short period of time we can discover, discuss, model and visualize processes, actors, business rules and related hotspots. The prepared model can be used and extended by software developers, UI/UX designers, testers and the product development team, making everyone’s perspective important. Event Storming is also a communication platform and knowledge-transfer tool you can apply so that what gets released into production reflects developers’ proper understanding of the problem, not assumptions or speculation.

Take-aways:

  • Core knowledge about Event Storming
  • Methods for engaging everyone in the process
  • Real-project use cases and outcomes from the last 3 years of using Event Storming
  • Our extensions to the initial Event Storming concepts

 

Mirjana Kolarov Test Department Manager and Test Architect, Levi9

BIO: Mirjana is a co-founder of the first testing community in Serbia, called Test’RS Club (testrs.club). After gaining diverse experience in testing, she concluded that she needed to give something back to the community, so she started one as a platform for sharing knowledge. During the day she is a Test Architect and Department Manager at Levi Nine. But above all, she is a passionate and highly motivated software tester who loves getting her hands dirty with actual testing and leads by example, promoting appropriate testing skills and techniques for 10 years and counting.

Presentation: Facing the uncertainty by Monitoring Production
session level: intermediate

Have you ever wondered what happens to the code your team developed after it is deployed to production? Is it a big unknown to you, or do you think you have a clue? I’m sorry to break your illusion, but even if you have an idea, it is usually either completely wrong or there is much more to it than you think.
But how do I know that? I didn’t, for a long time. Only after adding monitoring to our production environment did I learn how much I didn’t know, and that my perception was miles away from reality.
My daily job consists of monitoring our system in production and learning from it. In this talk I’ll explain what added value a tester brings to monitoring a system, and what they can learn by doing it. Some benefits include:
– oracles for our performance tests
– learning about system behaviour
– observing (potential) errors in the systems
– catching bugs and fixing them before they reach users
I’ll show you which tools we use on our projects, how we use them, and what we can see by observing them all together, because one tool is never enough.

Take aways:

  • Why do testers need to monitor performance of the system?
  • What can a tester find out from those metrics and behaviours?
  • Which tools can be helpful and how to combine them?

Manoj Kumar Applitools

BIO: Manoj Kumar is a Principal Test Automation Consultant and a Steering committee member of the Selenium Project. He has architected many automated testing solutions using tools in the Selenium ecosystem for both web and mobile apps. He is an open-source enthusiast and has contributed to different libraries such as Selenium, ngWebDriver, Serenity and Protractor. He is also the author of a Selenium blog AssertSelenium. He works at Applitools.

Presentation: Asynchrony: Automated testing with JavaScript Async Await
session level: advanced

Recent developments in the JavaScript world have turned attention toward adopting JavaScript for test automation, thus creating a full-stack, end-to-end framework based on JS in the development lifecycle. JavaScript keeps evolving with new versions such as ES5, ES6 and ES.Next. This session will show all you need to get started with a JavaScript-based test automation setup using async-await constructs, without having to deal with Promise chains and callback hell.

Take-aways:

  • What asynchrony is and what it has to do with programming – the uncertainty.
  • How to use Async Await in test automation code (Selenium, Protractor, Appium).

Adrian Stokes Computershare

BIO: A very passionate, battle-tested tester who has kept his enthusiasm for testing and all related disciplines. I’m a part-time poet who has tried his hand at rapping, but all in a testing context. Everything I share is in the hope of gaining some feedback to improve, or of helping others improve through sharing. I blog at thebigtesttheory.com, where you can find the Periodic Table of Testing – an emerging visual heuristic of the testing universe designed to help me remember all the things I need to consider in my testing. It has since been developed further to help scope new projects with a Must, Should, Could categorisation.

Presentation: Accessibility, Assumptions and Arguments
session level: beginner

There is a massive assumption in software development that accessibility = disability. I’ll dispel that myth with information, examples and practical tips, showing how our assumptions are potentially costing us customers and making interactions harder, and that the whole population has accessibility ‘issues’ with the applications we are building.

Take aways:

Attendees will learn about accessibility assumptions and how they cloud the small amount of attention we give it.
That accessibility actually affects approximately 90% of all the ‘users’ who visit your site or use your app.
Some common mistakes we make when designing sites and applications.
A few tips the attendees can take away and apply the next day to improve the reach and impact their sites and applications can make.

Rick Tracy Hapalion Consulting

BIO: Rick is an avid Test Philosopher, always up for a good debate, discussion or exploration of the many facets of testing and software development in general. He worked for 5 years at Rabobank WRR Finance in the Netherlands and now does development, testing, requirements analysis, Agile Scrum mastering and test coordination for his consultancy company Hapalion and for QualityMinds.

When not testing, discussing, or listening at conferences and events, Rick enjoys writing his (one day to be published!) novel, sword fighting and cuddling his outrageously adorable cats. While he has a reputation for always having a story to tell, Rick prefers an interactive lecture or debate to a chalkboard presentation.

Presentation: I Bought a Robot: Remote Working and Telepresence Tech.
Co-speaker: Jacek Tomczak
session level: beginner

I recently started working remotely, 8 hours away from my colleagues. In order to have smooth communication I am on Skype and Slack, and I have multiple digital scrum boards. Despite all of this, it still feels as if I am somehow disconnected from the team, as if being physically present and available for trivial things forms the basis of team culture. As such, I did the only rational thing possible: I bought a robot and decided to live in it.

This telepresence robot rolls around the halls of my new company and at times goes along to clients. When I want to get a cup of coffee or something I roll up to a colleague and invite them to the water cooler. I roll the thing into meetings, attending via robot rather than shaky video conference, and I turn its head when I want to talk to different people. When leaving meetings I make small talk with my colleagues and joke about the topics of the day.

All of this brings a much different experience to working remotely. I no longer feel apart and I am constantly finding new ways to interact, make my robot useful and be more present and available for my team. In this talk I will explore the theories and outcomes behind this experience so far.

Take-aways:

Key learning 1: Skype and video chat don’t fulfil the needs of team culture
Key learning 2: There are ways to be somewhere physically without being there in person
Key learning 3: Team interaction is more than important information exchanges
Key learning 4: Telepresence Robots can be used for far more than video chats
Key learning 5: Sometimes the fun move is also the best move

Alexei Vinogradov Vinogradov IT-Beratung

BIO: Alexei has been working on various IT projects in Germany for more than 20 years. He consults on testing and test automation. A Selenide developer and power user. Founder and moderator of the Radio QA podcast.

Workshop: Jump into the KISS UI-Test automation with Selenide (master class)
session level: intermediate

In this session I will not teach you the Selenide API, which is available online, or technical tricks for working with Selenium-based automation services.

I will demonstrate how I write my UI tests on real projects. The code of the tests will indeed be simple (KISS), but trust me, that simplicity was bought at the high price of much trial and error.

Along with Selenide, you will learn the basic ideas of structuring your code to be ready for Continuous Integration and cloud services.
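
For a flavour of that KISS style, here is a minimal Selenide-flavoured sketch of my own against a hypothetical login page (assuming Selenide and JUnit 5 on the classpath; this is not the workshop material itself):

```java
import org.junit.jupiter.api.Test;

import static com.codeborne.selenide.Condition.text;
import static com.codeborne.selenide.Selenide.$;
import static com.codeborne.selenide.Selenide.open;

// A deliberately simple Selenide test: no explicit driver management and no
// waits sprinkled around - Selenide opens the browser and retries the
// assertion until it passes or times out.
public class LoginTest {

    @Test
    void userSeesGreetingAfterLogin() {
        open("https://example.org/login");          // hypothetical application URL
        $("#email").setValue("user@example.org");   // hypothetical element selectors
        $("#password").setValue("secret");
        $("button[type=submit]").click();
        $(".greeting").shouldHave(text("Welcome")); // built-in smart wait
    }
}
```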

Take aways:

  • learn how to start a UI test automation project with Selenide
  • learn how to write simple PageObjects
  • learn how to write readable and maintainable tests

Jakub Kubryński Devskiller

BIO: Jakub is a software developer for whom coding is a way of life as well as a hobby.

He is focused on continuously improving software delivery processes by introducing new technologies and refining Lean methodologies.

For over 12 years of his professional career, he worked as a software developer, architect, team leader, and manager. He gained experience working on both sides of the delivery process, as a vendor and as a client. Today, he is a speaker, trainer, and co-founder of the online technical assessment platform, Devskiller.

Presentation: Infrastructure as code simplified by conventions
session level: beginner

Infrastructure, especially in a cloud-based environment, is complex, dynamic and consists of many unique components. If you take a quick look under the hood of any cloud-based infrastructure project, you will find that many of the elements involved are tightly intertwined, leading to higher maintenance costs and a chaotic environment. Fortunately, using infrastructure as code goes a long way towards resolving these issues, but if consistent standards are not enforced, you may be left with the same kind of mess you were trying to avoid in the first place. In this presentation, I’ll show how easily you can define your whole infrastructure stack, including all of the myriad relationships between components and the standards that govern them. Implementing this approach will make managing your systems easy and scalable.
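
As a toy illustration of the convention-over-configuration idea (a hypothetical naming scheme of my own, not tied to any particular IaC tool): rather than configuring every resource name, DNS entry and tag by hand, derive them all from a couple of inputs so every stack stays consistent by construction.

```java
import java.util.Map;

// Toy example: a single (service, environment) pair drives every derived name,
// so all stacks stay consistent without per-resource configuration.
public class StackConventions {

    record Stack(String service, String environment) {

        String artifactBucket() {          // e.g. acme-payments-prod-artifacts
            return "acme-" + service + "-" + environment + "-artifacts";
        }

        String dnsName() {                 // e.g. payments.prod.acme.internal
            return service + "." + environment + ".acme.internal";
        }

        Map<String, String> tags() {       // applied to every resource in the stack
            return Map.of("service", service, "environment", environment, "managed-by", "iac");
        }
    }

    public static void main(String[] args) {
        Stack payments = new Stack("payments", "prod");
        System.out.println(payments.artifactBucket());
        System.out.println(payments.dnsName());
        System.out.println(payments.tags());
    }
}
```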

Take-aways:

  • The benefits of infrastructure as code
  • What the challenges are and how to solve them
  • Using a convention-over-configuration approach
  • An overview of the most popular tools, with examples

 

Dana Aonofriesei Trustpilot

BIO: I fell in love with software testing and quality assurance 6 years ago, and I’m still in love with it and loyal to it, continuously learning and challenging my own biases. During the last 6 years I have tried different roles and assignments – Quality Assurance Engineer, Software Analyst, Scrum Master, Head of Quality Assurance – and each one of those roles helped me become what I am today: a passionate leader when it comes to product quality and an enthusiastic testing & quality evangelist within the company I work for.

Workshop: Test automation strategy cards game
session level: beginner

Rules of the game:
Working groups of max 5 people will get a card deck with Strategy, Challenge and Joker cards (a Joker can be a test automation solution, a crazy and disruptive idea, or a challenge) and a worksheet for writing/drawing their strategy (solutions, concerns).
The game is split into: Round 1, Pitches & Challenges, Round 2, Pitches and Winners.
Teams can choose as many Strategy cards from the deck as needed.
Each team member must choose at least 1 card.
Each team must choose at least 3 Challenge cards and 1 Joker card.
There are 6 blank cards in the deck: 2 Strategy, 2 Challenge and 2 Joker. Write on them whatever you think suits your strategy, or hand them over to other teams.
In the “Pitches and Challenges” stage, teams must challenge the strategies presented by other teams with cards from their own deck.
The working groups should explore the topics mentioned on the cards, engage with other teams or ask the facilitator.
Final score = the number of cards included in the strategy + the facilitator’s vote. Winners will get funny diplomas and badass badges. Funny test automation stickers for everybody!
Example of a Strategy card: Test Data (generate, create, maintain, delete test data).

Take-aways:

  • learn to shape a test automation strategy
  • discover new solutions, challenges, approaches
  • share your knowledge
  • pitch your strategy
  • win a badass badge

 

Kamila Gawrońska Vattenfall IT Services Poland

BIO: Kamila is an engineer in blood and bone. For more than 6 years she has worked in various roles – as a QA, UX Designer and Business Analyst – across several domains (construction, automotive, healthcare) and projects.

As a QA, many of her responsibilities are related to building and maintaining test environments. She designs web and mobile applications, takes care of functional and automated tests, works with BDD, programs in Ruby, gathers requirements and takes care of usability tests.

A dedicated Agile evangelist and advocate of the full-stack employee attitude, represented by developing interdisciplinary skills within a team. She advocates for this on her blog at https://leanqa.pl.

Workshop: Cloud Computing for Quality Engineers
session level: intermediate
co-speaker: Wojciech Gawroński

Cloud computing is becoming more and more popular in the IT world. The cloud-native approach even says that the cloud is the new norm. However, is it the same story for quality engineers?

Public cloud brings a lot to the table – like any tool, it has advantages and disadvantages. For testers it often seems neutral at first, but in the long run it introduces friction that simply wasn’t there before.

Our goal is to show in this workshop what cloud computing can give QA people. In two hours we can start the conversation about how it may help with typical tasks, and we show a few recipes for solving everyday problems. Everything is presented with examples and practical exercises in AWS.

Take-aways:

  • Evangelize about cloud computing.
  • Show how QA/TestOps can leverage a public cloud provider, based on AWS examples.
  • Expand horizons and broaden knowledge of DevOps culture.
  • Enable QAs to be more independent when it comes to managing test infrastructure in cloud environments.

 

Wojciech Gawroński Pattern Match

BIO: After tackling significant scaling and performance challenges in the eLearning, eCommerce, public transport and analytics fields and the hyper-performant world of real-time bidding (RTB), Wojtek chose to become an independent IT consultant.

All of that work was strongly connected with attention to detail and software quality. Wojtek’s code has helped to power a multi-billion-transaction platform distributed across the globe. He firmly believes in cloud computing and DevOps culture – he has led several companies’ transformations in those areas. He is not afraid to change hats when there is a need for it, as he firmly believes in the full-stack employee attitude.

In his spare time, he is a speaker at various IT-related meetups and conferences, co-organizes meetups in the Silesian region, blogs at http://afronski.pl and reads many books.

Workshop: Cloud Computing for Quality Engineers
Co-speaker: Kamila Gawrońska
session level: intermediate

Cloud computing is becoming more and more popular in the IT world. The cloud-native approach even says that the cloud is the new norm. However, is it the same story for quality engineers?

Public cloud brings a lot to the table – like any tool, it has advantages and disadvantages. For testers it often seems neutral at first, but in the long run it introduces friction that simply wasn’t there before.

Our goal is to show in this workshop what cloud computing can give QA people. In two hours we can start the conversation about how it may help with typical tasks, and we show a few recipes for solving everyday problems. Everything is presented with examples and practical exercises in AWS.

Take-aways:

  • Evangelize about cloud computing.
  • Show how QA/TestOps can leverage a public cloud provider, based on AWS examples.
  • Expand horizons and broaden knowledge of DevOps culture.
  • Enable QAs to be more independent when it comes to managing test infrastructure in cloud environments.

 

Victor Slavchev Siteground

BIO: My profession is software testing, and by that I don’t mean mindless clicking on UI elements or comparing results to predefined expected states. When I talk about testing, perform testing or teach testing, I always think of it as a scientific activity: a process of evaluating quality, of exploration, questioning, modelling, experimentation, risk assessment and gathering information in general. In other words, I take software testing very, very seriously!
I come from a non-technical background – linguistics – and I am very happy about it, since it provides me with a unique perspective and a lot of diverse experience, which is always beneficial in software testing.
In my previous experience as a software tester I was involved in many different projects related to mobile testing, testing software products in the telco area, integration testing and test automation (even though I prefer the term “tool-assisted testing”).
In general I am interested not only in the technical but also in the scientific side of testing and its relation to other disciplines such as epistemology, systems thinking, logic, problem solving, psychology and sociology.

Presentation: Automation vs. intelligence – “come with me, if you want to live”
session level: beginner

Have you ever heard the story that your job is automatable, that all human testers will be replaced by machines or automated tests and you will lose your job? Or, even worse, that machines and artificial intelligence will take over our craft and our lives and we will be totally useless? Do you buy this? Are you afraid?
“Come with me, if you want to live” – this is the famous line that many members of the human resistance in the Terminator franchise used when offering their help in the war against Skynet.
So, come with me (and John Connor), and join the testing resistance to fight on the side of intellect against the evil machine army. I am willing to put the “I” part of AI to the test by focusing on a few key topics:
If we were really “at war” with machines over productivity and capabilities, would we really have a chance?

  • Do we know what the benefits of human testing are? What are human testers irreplaceable for?
  • Is expert work just a set of procedures we can codify? Action vs. behavior.
  • Can we translate testing into machine language? Polimorphic and mimeomorphic actions – what are these?
  • Empirical evidence for expert testing systems or just myths?

Take-aways:

  • A practical view comparing human intelligence to machine intelligence.
  • A realistic view of the abilities a human tester has, and what makes them unique and untranslatable to a machine.
  • Practical advice on how to promote and develop the skills that make us stand out, even when compared with machines.

 

Thomas Sundberg Think Code AB

BIO: With more than 25 years in software development, Thomas is an independent consultant based in Stockholm, Sweden.

He has a Masters degree in Computer Science from the Royal Institute of Technology (KTH), Sweden’s leading technical university. After graduation, Thomas also taught at KTH.

Thomas currently teaches Behaviour-Driven Development, BDD, with Aslak Hellesoy, the creator of Cucumber. Thomas has commit privileges on the open-source Cucumber project, and works in partnership with Cucumber Ltd. as well as Mozaic Works.

As a consultant, trainer, and developer Thomas has created value for many teams around Europe. For the last ten years, Thomas has been an invited conference speaker at GeeCON, I T.A.K.E. Unconference, and European Testing Conference on topics including software craftsmanship, clean code, test automation, and continuous deployment.

His blog at http://www.thinkcode.se/blog shares his obsessions with technical excellence, Test-Driven Development, TDD, and BDD.

Thomas tweets as @thomassundberg.

Workshop: Example mapping
session level: intermediate

Transforming an idea into concrete examples is hard.

In this workshop you will explore a low-fidelity technique called Example Mapping. It involves an idea for a new feature, possibly described as a user story, plus pen, paper and a conversation.

The result will be concrete examples that describe the desired behaviour of the system for your customer or user. You will write better scenarios for your automated acceptance tests after this session.

Take-aways:

  • Learn how to use example mapping for exploring development
  • Learn how to facilitate a Three Amigos Meeting

 

Vera Gehlen-Baum QualityMinds

BIO: I finished my PhD in ‘Learning with new media’ in 2015 and started as a Requirements Engineer at QualityMinds right after. My first project was to test medical software and to improve the whole testing process – starting from the requirements. In this and other ongoing projects, I can combine several of my passions: applying well-researched learning theories to requirements and testing.

Presentation: Growing your magical creatures – include learning in your backlog
co-speaker: Beren Van Daele
session level: beginner

When we talk about Scrum teams and their interdisciplinary performance, it seems like many of us have this team of experts in mind, who are theoretically able to fulfil every role and could do every job in the team. Unicorns, dragons, chimeras, wizards… we call them.
From our point of view, these team members sound a lot like Nessie or the Yeti – they may very well exist, it’s just that we have never seen them.
We often have a good mix in our team: some members are more junior than senior and struggle with the software, the framework, themselves, additional tools… others have specialities in one thing and only a shallow understanding of others – and that is OK!

Currently we are working on a test automation project that struggles with multiple steep learning curves as we tackle understanding the product under test, building up our skill set and learning to function as a team.
Given our peculiar context, we’ve experimented with applying Vera’s deep understanding of the theory of learning and putting it into practice in a team consisting of real people, real challenges and real learning needs, resulting in an honest and valuable experience report.

Take-aways:

We’d like to share some insights on how to train people within the project to make the group more homogeneous in terms of knowledge sharing and teaching each other. To that end, we (in the roles of Product Owner and Scrum Master) include learning, and especially learning goals, in the backlog and apply Scrum methods to them.

Beren Van Daele Software Tester

BIO: Belgian – Co-Creator of TestSphere, Organiser of BREWT & Freelance Consultant.

I’m a freelance consultant from Belgium who helps teams improve their work and deliver quality software.
Part of my time is spent working remotely as a Product Owner. The other part I spend travelling Europe, giving workshops on RiskStorming and speaking at conferences, companies and meetups.

Presentation: Growing your magical creatures – include learning in your backlog
co-speaker: Vera Gehlen-Baum
session level: beginner

When we talk about Scrum teams and their interdisciplinary performance, it seems like many of us have this team of experts in mind, who are theoretically able to fulfil every role and could do every job in the team. Unicorns, dragons, chimeras, wizards… we call them.
From our point of view, these team members sound a lot like Nessie or the Yeti – they may very well exist, it’s just that we have never seen them.
We often have a good mix in our team: some members are more junior than senior and struggle with the software, the framework, themselves, additional tools… others have specialities in one thing and only a shallow understanding of others – and that is OK!

Currently we are working on a test automation project that struggles with multiple steep learning curves as we tackle understanding the product under test, building up our skill set and learning to function as a team.
Given our peculiar context, we’ve experimented with applying Vera’s deep understanding of the theory of learning and putting it into practice in a team consisting of real people, real challenges and real learning needs, resulting in an honest and valuable experience report.

Take-aways:

We’d like to share some insights on how to train people within the project to make the group more homogeneous in terms of knowledge sharing and teaching each other. To that end, we (in the roles of Product Owner and Scrum Master) include learning, and especially learning goals, in the backlog and apply Scrum methods to them.

Tomasz Klepacki JIT Solutions

BIO: Tomasz Klepacki is a Test Lead and Test Architect at JIT Solutions, and a tester with more than 7 years of experience. For the last 5 years he has specialised in designing, developing and maintaining automated tests for web applications as well as performance tests. He gained his testing experience in projects in, among others, the insurance, maritime and information-management industries. He currently designs the test strategy and infrastructure and delivers test frameworks for the E-Commerce department of LPP SA. An enthusiast of test automation, new technologies and, recently, TestOps. A speaker at local testing meetups and a trainer. In his free time, a passionate guitarist and fan of good cinema.

Workshop: Selenium Grid, Docker, Zalenium and Jenkins – how to build a test infrastructure for web applications from scratch.
session level: intermediate

1. A minimum of theory:
– browser tests: problems and challenges
– approaches to setting up a test infrastructure for Selenium tests (pros and cons): a) SaaS – ready-made infrastructure in the cloud; b) setting up the infrastructure yourself with VMs – a real-project example; c) Selenium Grid
2. What Docker is – basic operations
3. Setting up Jenkins in Docker
4. CI pipeline – configuring a job and running the tests
5. Ways of parallelising tests with TestNG and the Maven Surefire Plugin – running tests in parallel on several browsers
6. Setting up a test infrastructure based on Selenium Grid and Docker using the official Docker images provided by SeleniumHQ, and running the tests (see the sketch after this outline)
7. Finally – running tests from Jenkins in parallel on a test environment based on Zalenium
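
For orientation, the test code itself barely changes once such an infrastructure exists: instead of starting a local browser, it requests a session from the Grid/Zalenium hub. A minimal sketch of my own (the hub URL and the application under test are assumptions; the parallelisation itself would come from TestNG or the Maven Surefire Plugin):

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

// Instead of a local ChromeDriver, the test asks a Selenium Grid / Zalenium hub
// (assumed to listen on localhost:4444) for a Chrome session.
public class GridSmokeTest {

    public static void main(String[] args) throws Exception {
        WebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"),  // hub URL is an assumption
                new ChromeOptions());
        try {
            driver.get("https://example.org");            // hypothetical application under test
            if (!driver.getTitle().contains("Example")) {
                throw new AssertionError("Unexpected title: " + driver.getTitle());
            }
        } finally {
            driver.quit();                                // frees the grid node for the next test
        }
    }
}
```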

Take-aways:

  • Participants will learn how to spin up a local Jenkins instance in a container
  • Participants will learn how to configure pipelines in Jenkins
  • Participants will learn how to run tests in parallel on several browsers
  • Participants will learn how to set up a test infrastructure based on Selenium Grid and Docker, using the official Docker images provided by SeleniumHQ
  • Participants will learn how to set up a test infrastructure based on Zalenium
  • Finally, participants will learn how to run Selenium tests in parallel from a Jenkins Pipeline on a Zalenium-based test infrastructure

 

Szymon Ramczykowski Lead Test Engineer, Kainos

BIO: I have been a tester since 2009. Across various projects, my main interests have been automation and improvements to software development processes. Through years of experience, I have evolved from a bug hunter into a bug preventer. In my current role I am focused on ensuring that automated tests deliver the right benefits to the organization. After hours, I am a happy father, husband, traveller and guitar player.

Presentation: Scalability of good practices: How to deliver a complex product with 6 scrum teams and not go insane.
session level: intermediate

While the business is running and the product is expanding, the development team is growing. A startup crew with 1 scrum team transforms into a mature organisation with 6 scrum teams and over 70 people involved in the development process. Testing is in the middle of this from the very beginning. In startup culture there is often no time for automation; everything is tested manually. This creates so-called testing debt, which can be difficult to address when the company is growing. There are plenty of good practices that help a team tackle such issues, but implementing them in a larger organisation working on one product can be tricky.

During this presentation we would like to show our team’s journey: from a place where testing was a time-consuming, repetitive, manual job designated only for test engineers, to a place where testing is a key creative activity done by all team members.

Take-aways:

  • How the Test Engineer’s role evolved
  • Tools and techniques used depending on context
  • Finding the best possible combination of business value and people’s needs
  • Empowering people to make improvements on their own

 

Marcin Grzejszczak Pivotal

BIO: Author of “Mockito Instant” and “Mockito Cookbook” books. Co-author of Applied Continuous Delivery Live Lessons. Co-founder of the Warsaw Groovy User Group and Warsaw Cloud Native Meetup.

Lead of Spring Cloud Sleuth, Spring Cloud Contract and Spring Cloud Pipelines projects at Pivotal.

Presentation: Building Resilient Microservices
co-speaker: Olga Maciaszek-Sharma
session level: intermediate

Distributed systems and microservices are currently one of the strongest trends in the field of enterprise-scale systems development. The main reason for that is that given the size and throughput of modern systems, massive application scalability has become a core requirement. In order to achieve scalability, the system has to allow for easy partitioning and concurrency, which are problematic with the traditional monolithic applications. Also, as microservices provide much greater independence to the teams both in terms of development and deployments, they let us introduce changes and add new features much more quickly and easily, in keeping with the Agile principles. With all that, microservices seem to be the way to go.

Nevertheless, we can’t forget that as we switch to distributed architectures, the complexity of our systems grows and the communication between them becomes a first class citizen of our applications, and as the individual services become easier to develop, testing and ensuring reliability on the system-wide level can become much more difficult. It can decrease the stability of our applications and cause production issues.

However, there are ways in which we can greatly reduce the risks associated with working with distributed architectures and there are tools that can help us greatly increase system stability and make our microservice-based solution much more resilient. In this presentation, we will talk about some such approaches, including consumer-driven contracts, distributed tracing and automated deployments and some tools provided by Spring Cloud to make incorporating these principles easy, along with demos and live coding.
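
To illustrate just one of those approaches, consumer-driven contracts, here is a miniature, dependency-free sketch of my own using only the JDK (it is not Spring Cloud Contract itself, which automates this far more thoroughly): the consumer’s expectation of the producer is pinned down as a stubbed response, the consumer’s client code is exercised against that stub, and the same expectation would later be verified against the real producer.

```java
import com.sun.net.httpserver.HttpServer;

import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The consumer-driven-contract idea in miniature: the "contract" is the agreed
// response for GET /orders/42, served by a stub of the producer.
public class ConsumerContractSketch {

    public static void main(String[] args) throws Exception {
        // 1. Stub producer exposing the agreed response.
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        String agreedBody = "{\"id\":42,\"status\":\"SHIPPED\"}";
        stub.createContext("/orders/42", exchange -> {
            byte[] body = agreedBody.getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        stub.start();

        // 2. Exercise the consumer's client code against the stub.
        try {
            String baseUrl = "http://localhost:" + stub.getAddress().getPort();
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(baseUrl + "/orders/42")).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            if (!response.body().contains("\"status\":\"SHIPPED\"")) {
                throw new AssertionError("Consumer expectation broken: " + response.body());
            }
            System.out.println("Consumer expectation holds against the stub.");
        } finally {
            stub.stop(0);
        }
    }
}
```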

Take-aways:
Learning about:

  • microservices-based architecture
  • independent development and deployments
  • maintaining reliability, testing and monitoring

 

Olga Maciaszek-Sharma Devskiller

BIO: Olga Maciaszek-Sharma is a Java and Groovy developer at Devskiller. She gained her experience working with microservices where cutting-edge solutions were used, as well as with complex legacy systems, implementing both new business features and solutions aimed at improving the process of continuous deployment and the setup of applications. Olga is also a contributor to OSS projects: Spring Cloud Contract (formerly Accurest), JFairy, Jenkins Pipeline Plugin, Jenkins Stash Pull Request Builder Plugin, and others. Before switching to development, she worked for more than 3 years as a Quality Assurance Engineer specialising in test automation.

Presentation: Building Resilient Microservices
co-speaker: Marcin Grzejszczak
session level: intermediate

Distributed systems and microservices are currently one of the strongest trends in the field of enterprise-scale systems development. The main reason for that is that given the size and throughput of modern systems, massive application scalability has become a core requirement. In order to achieve scalability, the system has to allow for easy partitioning and concurrency, which are problematic with the traditional monolithic applications. Also, as microservices provide much greater independence to the teams both in terms of development and deployments, they let us introduce changes and add new features much more quickly and easily, in keeping with the Agile principles. With all that, microservices seem to be the way to go.

Nevertheless, we can’t forget that as we switch to distributed architectures, the complexity of our systems grows and the communication between them becomes a first class citizen of our applications, and as the individual services become easier to develop, testing and ensuring reliability on the system-wide level can become much more difficult. It can decrease the stability of our applications and cause production issues.

However, there are ways in which we can greatly reduce the risks associated with working with distributed architectures and there are tools that can help us greatly increase system stability and make our microservice-based solution much more resilient. In this presentation, we will talk about some such approaches, including consumer-driven contracts, distributed tracing and automated deployments and some tools provided by Spring Cloud to make incorporating these principles easy, along with demos and live coding.

Take-aways:
Learning about:

  • microservices-based architecture
  • independent development and deployments
  • maintaining reliability, testing and monitoring

 

Michał Krzyżanowski Cognifide / AutomatingGuy

BIO: An experienced Senior QA Engineer. Test automation specialist, trainer and evangelist. Technical testing, CI/CD and Test/DevOps enthusiast. A QA Lead responsible for quality-related aspects throughout the project lifecycle, now also a consultant helping others automate their work. He is constantly looking for ways to improve even good solutions.
He embraces his love of discussing things as one of the DebatQA co-hosts. Speaker at meetups and conferences. Blogger at automatingguy.com.

Presentation: Evolving as a QA. Do you still care about quality?
session level: intermediate

Industry and technology evolve ever more dynamically, and so does the role of Quality Assurance. Do we even still call it that in the age of DevOps, microservices, ML, AI, serverless, etc.? Who are we nowadays? What exactly is our role?
Are we still focused on delivering the best possible software, or do we prefer chasing all the buzzwords, new technologies and methodologies the industry throws at us?

In my talk, I would like to tell you the story of a guy who went from a total greenhorn in IT to a guy leading projects for clients from the Fortune 500 list – my story.
It is going to be a bit personal, a bit controversial, but hopefully, in the end, also optimistic.
I want to show you how I evolved as a tester, a QA and, most importantly, an engineer. I want to share which parts of the surrounding change I happily embrace and which parts I hate and am sad to see.

Based on my observations of the industry, many interviews, countless discussions at conferences and meetups, and of course my personal experience, I would like to explore the evolution of the QA role.

Take-aways:

  • why you should not care about the labels put on different roles
  • how to combine innovation and the urge to experiment with a tight budget and schedule
  • how to satisfy a client who still lives in the previous era, while using modern approaches and keeping it enjoyable for both you and your team
  • why the often-forgotten basics are still needed, regardless of whatever fancy methodology you work in or flashy new technology you implement
  • why focusing on people is more crucial than focusing on KPIs and metrics
  • why psychology, empathy and ethics are as critical as the arsenal of tools, libraries and techniques you might have
  • why it is OK to admit, as a quality-focused person, that ‘good enough is perfect’

 

Karol Szewczak Lufthansa Systems Poland

BIO: I’m a passionate tester (and Twitter addict) continuously looking for ways to increase my knowledge and skills and to share what I’ve learned with others. I started my career in testing a decade ago at a telecommunications company and have been spending time ever since on learning, learning and, once again, learning. Right now I’m a test architect looking for ways to evolve testing at our company, to move it to the next level and to remove obstacles hindering the daily work of my colleagues.

Presentation: Shift monitoring left
session level: intermediate

“Migrating a monolithic system to an ecosystem of microservices is an epic journey”, and as before any journey, you should prepare yourself. But what does that really mean for us and our teams? Change is coming, bringing more questions than answers, and uncertainty that should be clarified before landing safely in production rather than hitting the wall hard. Is our test strategy capable of handling the new reality? Without the safety net of extended testing phases, how can we evaluate the product, and how can we support answering questions like: are we done? Is the product ready? We are trying to shift testing left and, on the other hand, test in production – but are we ready, with our product and tools, to be in production? Is our monitoring strategy verified? Have we tested it? By shifting monitoring left and adding it to the test strategy we gain confidence not only in the monitoring itself but also in the product: you get continuous feedback on your earlier analysis and test scenarios as well.
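
One concrete way to shift monitoring left is to treat the monitoring surface itself as something the delivery pipeline tests. A minimal sketch of my own (the endpoint paths, metric name and base URL are assumptions, not a specific tool’s API):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Shift-monitoring-left in miniature: the pipeline fails early if the service
// does not expose the health status and the metric that production dashboards
// and alerts will rely on.
public class MonitoringContractCheck {

    private static final String BASE_URL = "http://localhost:8080"; // assumed test environment

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The health endpoint must answer and report the service as up.
        HttpResponse<String> health = client.send(
                HttpRequest.newBuilder(URI.create(BASE_URL + "/health")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        if (health.statusCode() != 200 || !health.body().contains("UP")) {
            throw new AssertionError("Health check not ready for production: " + health.body());
        }

        // The metric our alerting is built on must already be exposed here.
        HttpResponse<String> metrics = client.send(
                HttpRequest.newBuilder(URI.create(BASE_URL + "/metrics")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        if (!metrics.body().contains("http_server_requests")) {   // assumed metric name
            throw new AssertionError("Expected request metric is missing from /metrics");
        }
        System.out.println("Monitoring contract satisfied.");
    }
}
```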

Take-aways:

Participants will be encouraged to include continuous monitoring in their daily routines and will gain confidence in exploring new areas outside their comfort zones. They will learn that DevOps is also for testers and that they can play a significant role there; that metrics and monitoring should not be left to the end of the SDLC; and that the whole team can benefit from early adoption of continuous monitoring. They will get a brief overview of existing tools they can use and see that there is nothing to be afraid of. They will also see the power of visualisation – how a simple dashboard can spread information about the current state of an environment or product, and how different stakeholders react to it.

Jevgeni Demidov Pipedrive

BIO: Jevgeni has been in the software development industry for seven years now. During his journey in this field, he has participated in many different software testing activities, from organizing a software testing process from scratch to developing utilities that help improve product quality.

He started as a Quality Assurance Engineer back in 2011, working on mobile devices and software solutions for police, courts and other governmental structures. Later, gradually and over a long period, he moved into the position of Quality Assurance Manager, where he began to develop in the field of management. Currently, with some understanding of the meaning of both the word “Test” and the word “Management”, he has decided to continue his career as a Software Development Engineer in Test. In this position he has the opportunity to gain more experience in all parts of product creation, including software development itself, and to apply his test expertise in both development and testing activities.

On top of all of the above, throughout his journey in the Quality Assurance area he has always loved analysing non-standard situations, identifying what caused the problems and applying new approaches to solve them.

Presentation: How-To Guide: Statistics Based on the Test Data
session level: beginner

Each of us has a project: a favourite, dear one that you wish to see grow and prosper. So we write many manual tests, automate the repetitive actions, report hundreds of issues in Jira or another bug management tool and, as a result, generate a lot of data that we do not use. But how do you assess your project’s prosperity if there are no criteria for that very prosperity?

How can you react quickly to problems, before they become irreparable, if you are not gathering any information that can hint that something is going wrong?

How do you understand what should be improved if you don’t know that problems even exist in your project?

I have an answer: “Statistics!” Yes, when you hear this word in the context of testing you might think it applies much better to sales or any other marketing field, but definitely not to the testing process itself. That’s why, instead of formulas and a list of metrics, I will tell you about my experience of collecting and analysing statistics – and the results I have achieved since I started using them.

Take-aways:

Statistics are needed to manage a project effectively: to diagnose problems, localise them, correct them and verify whether the methods you chose to solve the problem have actually helped. The goal is to extract the key values and present them in a compact, straightforward way.

During the presentation I will provide the following information:

  • why gathering test statistics is important
  • how and where to collect the statistics
  • what value the test results can bring to your daily workflow
  • how to make decisions based on the information you can get from test execution statistics
  • how to find the root cause of failures and solve testing-related problems, plus samples of stats you can start gathering right now (see the sketch below)
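
As a small, made-up example of turning raw test results into such statistics (my own illustration, not the speaker’s setup): an overall pass rate is easy to compute, and grouping results per test also exposes flaky tests that both passed and failed on the same code.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Toy test-result statistics: pass rate plus per-test grouping that flags
// flaky candidates and consistent failures. The data is fabricated.
public class TestStats {

    record Result(String testName, boolean passed) {}

    public static void main(String[] args) {
        List<Result> lastRuns = List.of(
                new Result("login", true), new Result("login", true),
                new Result("checkout", true), new Result("checkout", false),
                new Result("search", false), new Result("search", false));

        double passRate = 100.0 * lastRuns.stream().filter(Result::passed).count() / lastRuns.size();
        System.out.printf("Pass rate: %.1f%%%n", passRate);          // 50.0%

        Map<String, List<Result>> byTest = lastRuns.stream()
                .collect(Collectors.groupingBy(Result::testName));

        byTest.forEach((name, results) -> {
            boolean everPassed = results.stream().anyMatch(Result::passed);
            boolean everFailed = results.stream().anyMatch(r -> !r.passed());
            if (everPassed && everFailed) {
                System.out.println("Flaky candidate: " + name);       // checkout
            } else if (everFailed) {
                System.out.println("Consistently failing: " + name);  // search
            }
        });
    }
}
```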

 

Thierry de Pauw ThinkingLabs

BIO: Thierry is a Continuous Delivery coach and Lean and XP Software Engineer with a high affinity for operations.

He is a jack of all trades with a passion for helping teams create meaningful software, and has a keen eye for code quality and the software delivery process, from customer interaction to continuous delivery. Instead of balancing quality against delivery, he believes and practises that better quality is actually the way to more and better deliveries.

Thierry is the founder of ThinkingLabs, a consultancy focused on Continuous Integration and Continuous Delivery.

Presentation: Feature Branching considered Evil
session level: intermediate

Feature branching is gaining popularity again with the rise of distributed version control systems. Although creating a branch has become very easy, it comes at a cost: long-lived branches break the flow of the software delivery process, hurting both throughput and stability.

This session explores why teams use feature branches, what problems they introduce, and what techniques exist to avoid them altogether. It examines what exactly is evil about feature branches: not necessarily the problems they introduce, but rather the real reasons teams use them in the first place.

After this session, you’ll understand a different branching strategy and how it relates to continuous integration.

The target audience is anyone using version control systems in a Continuous Integration or Continuous Delivery context.

Take-aways:

  • understand why teams are using feature branching
  • explain why feature branching is problematic
  • describe alternatives to feature branching (one alternative is sketched after this list)
  • run an experiment with trunk-based development
  • understand if all teams can adopt trunk-based development
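
One common alternative to long-lived feature branches is trunk-based development combined with feature toggles: unfinished work is merged to the main branch but kept dark until it is ready. The Scala sketch below is a rough illustration of that idea, not material from the session; the environment variable, feature key, and service names are all invented for the example.

// Feature toggles keep half-finished work invisible to users while it lives on trunk.
// All names here (env var, feature key, services) are illustrative only.
object FeatureToggles {
  // In a real setup this would come from configuration or a toggle service.
  private val enabled: Set[String] =
    sys.env.get("ENABLED_FEATURES")
      .map(_.split(',').map(_.trim).toSet)
      .getOrElse(Set.empty[String])

  def isOn(feature: String): Boolean = enabled.contains(feature)
}

object CheckoutService {
  def totalPrice(items: Seq[BigDecimal]): BigDecimal =
    if (FeatureToggles.isOn("new-discount-engine"))
      newDiscountedTotal(items) // merged to trunk, but dark until toggled on
    else
      items.sum                 // current behaviour remains the default

  // Placeholder for the in-progress feature being developed directly on trunk.
  private def newDiscountedTotal(items: Seq[BigDecimal]): BigDecimal =
    items.sum * BigDecimal("0.95")
}

The point is that the in-progress change lives on the main branch behind a toggle, so integration happens continuously instead of in one large, late merge.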

 

Konrad Marszałek Spartez

BIO: I have 10 years of experience in software quality assurance, spread across two cities (Kraków and Gdańsk) and three companies. I have had the pleasure of working for a successful startup, a mid-sized company with a cloud product, and a large company whose product is used by 30k customers. I like to implement simple solutions that make a difference. My motto is “Make work productive and enjoyable for myself and others, especially by means of automation.”

Workshop: Kickstart your performance testing
session level: beginner

Do you test performance when your team develops a new feature? Why not? Performance testing is often seen as difficult and is skipped by testers.

I’d like to take you on a pragmatic trip through web application performance testing. We’ll start with low-effort activities based on rough assumptions to get immediate results. Then we’ll look at the drawbacks and trade-offs we’ve made and try to improve the accuracy of our measurements.

We’ll work through various scenarios so that you can grasp a broad, holistic approach to the topic:

  • Performance checks during exploratory testing (e.g. Fiddler, Charles proxy, YSlow, Chrome dev tools)
  • Load generation (e.g. JMeter, Gatling)
  • Application Performance Management solutions (e.g. New Relic)
  • Utilising staging/dogfooding/demo environments to learn about performance
  • Monitoring tools (e.g. ELK, Splunk, Grafana, Graphite)
  • Data volume testing
  • Using Selenium to get client side performance metrics

Given two hours, I won’t go into the details of all the aspects above. I’ll cut theory to the bare minimum and concentrate on specific, hands-on examples showing the value of each approach; the load-generation step, for instance, can start as small as the Gatling sketch below.
After each example you’ll be asked to run a similar exercise during the workshop.
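
To give a flavour of what a first load-generation attempt can look like, here is a minimal Gatling simulation written in the Scala DSL (Gatling 3.x). The base URL, endpoints, user count and threshold are placeholders, not material from the workshop.

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class BasicLoadSimulation extends Simulation {

  // Placeholder environment; point this at your own staging/demo system.
  val httpProtocol = http.baseUrl("https://staging.example.com")

  // A simple user journey: open the home page, then a product page.
  val scn = scenario("Browse product page")
    .exec(http("home").get("/"))
    .pause(1.second)
    .exec(http("product").get("/products/42"))

  setUp(
    // Ramp up to 50 virtual users over 2 minutes.
    scn.inject(rampUsers(50).during(2.minutes))
  ).protocols(httpProtocol)
    .assertions(global.responseTime.percentile3.lt(800)) // 95th percentile under 800 ms
}

Rough as it is, a simulation like this already gives you a baseline and a pass/fail assertion you can refine later, which is exactly the low-effort-first approach described above.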

Take-aways:

  •  Be convinced that performance testing is not secret knowledge reserved for the chosen few.
  •  Learn a variety of approaches to performance testing.
  •  Learn a portfolio of tools that will help you test performance.

 

Lina Zubyte ThoughtWorks

BIO: Lina Zubyte is a passionate Quality Enthusiast who loves to ask questions, test, collaborate with diverse departments and investigate issues. Lina has worked in companies of different sizes (large multinational companies and a startup), moved countries for work, and had to adapt quickly to being out of her comfort zone. Her favorite parts of being a quality professional are diving deep into complex issues that may even reveal design or algorithm flaws, using monitoring tools and analytics data to understand the impact of the issues she finds, and collaborating with the team to build a high-quality product.
In her free time, Lina loves traveling and discussions with inspiring people.

Presentation: It’s Tricky: Chatbots & QA
session level: beginner

In a 2016 Oracle survey on tech trends, over 80% of businesses answered that they already had, or would implement, a chatbot by 2020. A few years on, the craze is not quite as sky-high, but the trend is still there: more and more businesses are experimenting with chatbots, especially for customer-facing services. With chatbots, however, comes a huge amount of uncertainty.

Working on a chatbot as a QA, I felt that traditional testing methods did not apply as well in this new area. That led me to do a lot of research, experiment, and even build my own chatbot to explore the topic further. In this session I will share what I have learned about chatbots and discuss these questions: What should we think about when we build chatbots? How can we as QAs help ensure the quality of a chatbot? What, in the end, is a good-quality chatbot? Join my talk and get ready for the future of chatbots.

Take-aways:

  • Chatbots are still a new and growing area, and little is known about quality metrics for conversations.
  • Validating machine learning for chatbots is currently very challenging, because human conversation itself is challenging.
  • As QAs we need to learn to let go of control and adapt to uncertainty in order to reach quality in chatbots.

 

Michał Płachta Reality Games

BIO: I am a polyglot software engineer specialising in developing distributed applications, as well as a tea drinker, cyclist and functional programming enthusiast. I love the human component in software projects. I currently work as a team leader at Reality Games, where we are building games based on big data.

Presentation: Building testable APIs using functions & meshes
session level: beginner

In this live-coding talk you will learn about functional techniques that encourage separation of concerns. I will build a stateful HTTP API from highly isolated components that are easier to test than entangled, spaghetti-like codebases. I will show you how tests can help us scaffold the architecture and how we should approach testing in the era of microservices. In the session I will use immutability, type parameters and function parameters as tools to implement a very practical example: a Pac-Man game web server. I will code in Scala using Akka HTTP and deploy to Kubernetes with the Istio service mesh.
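
To give a rough idea of the kind of separation the abstract describes (an illustrative Scala sketch, not the speaker’s code), the game logic can live in a pure, immutable core that the HTTP layer merely delegates to via a function parameter:

// Immutable domain model: every move produces a new state instead of mutating one.
final case class Position(x: Int, y: Int)
final case class GameState(pacMan: Position, score: Int)

object GameLogic {
  // Pure transition function: no HTTP, no shared mutable state, trivially unit-testable.
  def move(state: GameState, direction: String): GameState = direction match {
    case "left"  => state.copy(pacMan = state.pacMan.copy(x = state.pacMan.x - 1))
    case "right" => state.copy(pacMan = state.pacMan.copy(x = state.pacMan.x + 1))
    case "up"    => state.copy(pacMan = state.pacMan.copy(y = state.pacMan.y - 1))
    case "down"  => state.copy(pacMan = state.pacMan.copy(y = state.pacMan.y + 1))
    case _       => state
  }
}

// The HTTP layer (Akka HTTP or anything else) becomes a thin adapter that receives
// the transition as a function parameter, so tests never need a running server.
final class MoveHandler(transition: (GameState, String) => GameState) {
  def handle(current: GameState, direction: String): GameState =
    transition(current, direction)
}

object MoveDemo extends App {
  val handler = new MoveHandler(GameLogic.move)
  println(handler.handle(GameState(Position(1, 1), score = 0), "right"))
  // GameState(Position(2,1),0)
}

Because the core is a pure function over immutable values, it can be tested exhaustively in isolation; the HTTP adapter and the service mesh then only need thin integration checks.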

Take-aways:

How to:

  • use immutability,
  • make HTTP tests stable,
  • perform separation of concerns using abstraction,
  • test on production in isolation using sidecars in service meshes.

 

Iza Goździeniak Allegro

BIO: I’m a Lean & Lead Agile Coach at Allegro. Satisfied users, happy clients and motivated teams who know why, and for whom, they do things are the essence of my work. I help teams cooperate effectively using the right lean and agile practices and find the best ways to achieve their goals. For over 10 years I have been engaged in e-commerce. I have experience working with product, services, operations and infrastructure teams as well as start-ups. I also keep a Product Owner perspective, as I used to be one at a start-up.

Workshop: The art of improving
session level: intermediate & advanced

Inspect and adapt are the core pillars of Scrum and other agile ways of working. That is easy to say but harder to implement. Many teams hold regular sprint retrospectives, yet they don’t improve, or improve very slowly. They agree on new actions during each retro but don’t change their behaviour during the sprint. Maybe it’s time to change attitude and use different tools to make good things happen. During this workshop you will discover what supports deep change and improvement, and why change is not that easy.

Take-aways:

Focusing on symptoms makes improvement difficult, and taking big steps makes it even harder.