Battling developer shortage with educational cooperation
Without today’s juniors there wouldn’t be tomorrow’s seniors. This holds true in any specialist field, yet the basics of a healthy workforce structure seem to have been forgotten in today’s knowledge work.
As a company that cooperates extensively with public sector organizations, we have to say that it is sometimes exceedingly difficult to get newcomers to the software development field hired into projects. Project teams are picked based on CVs in close-fought competitive tendering, with winning bids invariably going to teams made up of ten-year veterans with black belts in coding. Beginners can’t get hired to gather valuable experience even if their fees were next to nothing.
The board of the Code from Finland association, led by its chairman Janne Kalliola, laudably pointed out in their opinion piece in Helsingin Sanomat that strict competitive tendering criteria are exacerbating the developer shortage and holding back the growth of the industry.
Even though experience is valuable and valued, the programming industry needs new skilled workers now more than ever. The way we are trying to affect the industry is through educational cooperation and by hiring junior software developers ourselves.
Cooperation with the Helsinki Business College
A versatile content management system like Drupal, which is aimed at large clients, requires a constant source of experts. Fresh software developers are often enthusiastic open source contributors, and their careers will benefit from Drupal’s large international community. Being an active open source developer could well act as a stepping stone to an international career in programming.
In Finland, Drupal programming education is offered by our longstanding partner, the Helsinki Business College. Our collaboration over the years has been varied: we’ve taught in their courses, coached teachers in the use of Drupal, organized company visits and taken in interns whenever possible. When things work out, we are happy to offer permanent jobs to the interns. Many HBC students have graduated into working life through us.
In January of 2022 we will take the next step in our collaboration as we become the cooperation partner for the React & Drupal Full Stack Web Developer study program at HBC. We want to take active part in familiarizing students with the everyday life within the software industry, as well as the Drupal community and naturally with us as a company.
As a partner we will help HBC to specify the practice project carried out during the studies, and I will work as the project’s product owner for the development teams. At the end of the study period we hope to snag as many skilled developers as possible as interns, which will hopefully lead to them becoming gainfully employed developers in the industry.
Maintenance work provides all-round experience
Even though the public sector’s attitude towards novice developers is hardly enthusiastic, competitive tendering shenanigans won’t be the end of anyone’s career. The comprehensive maintenance and small scale development services we offer are of key importance when it comes to getting junior coders employed. Most of our interns will find themselves under the care of our Magical Support team and will later be employed as part of the team.
We at Druid don’t consider the maintenance services we provide a necessary evil but rather a significant source of revenue. Over 95% of our clients move to maintenance at the end of an implementation project, and the amount of work done under the maintenance contract is often manifold compared to the original project.
Our multilingual and multi-skilled team is responsible for ensuring that our clients’ websites remain functional, secure, and up-to-date in terms of features. The technical skills required in these tasks range from the comprehensive knowledge of large Drupal systems to creating modern JavaScript and React code. Is there a better place for a newcomer to the industry to get a feel of the diversity of working life?
We at Druid have helped bring to life a number of success stories whose main star has been a motivated fledgling software developer. Those are the stories we yearn for. That’s why we want to offer a solid stepping stone to working life to as many people as possible.
30.06.2021
Druid Tech Survey 2021: the technologies we use and like
At Druid we’ve decided to establish a tradition of keeping a record of the technologies we use in our projects and taking note of the technologies we would like to use more in our work.
It will be inspiring to review statistics in the coming years and see what kinds of turns we have taken along the way in terms of technology stack choices on our quest to create top quality digital solutions. And it’s always interesting to look back on history to check what has become obsolete, what tech we have chosen instead and why.
We started this spring by conducting our very first tech survey. We asked our software developers about the technologies they are currently using and the trends and languages they’re enthusiastic about. The answers gave us a deeper insight into the range of technologies we’re using and what we think will stick with us in the foreseeable future.
Technologies we use now
Since the company’s foundation in 2012, Drupal has been the main technology we’ve relied on in our projects. It is still the case, as almost 80% of our developers are engaged in Drupal-based projects. Around a third of the developers build digital solutions using the Symfony framework.
When it comes to the frontend, Drupal frontend solutions based on vanilla JS and jQuery, as well as custom themes and templates built with Twig, JS and CSS, are currently in heavy use.
However, we see the growing importance of JavaScript frameworks and already use some of them whenever they fit the project goals. More than 65% of our developers work with JS-based technologies. A quarter of our frontend developers are currently involved in building and supporting React apps. Vue and Angular are used less often.
To summarize, the most popular languages and frameworks we use today are PHP, Drupal, JavaScript, Symfony, TypeScript, React, Vue. But of course, regardless of the tech we use in our projects, the focus always lies on security, scalability, usability, accessibility, and high performance.
Technologies we would like to use – and to avoid
When it comes to personal preferences, we see a surging interest in JavaScript, both for backend and frontend parts.
Backend frameworks and languages
In backend development, some of us still see ourselves doing projects with Drupal and Symfony. However, the number of people who would like to switch to a JS-based backend and serverless architecture has grown significantly, while interest in Drupal is slowly going down. Both serverless and headless architecture were mentioned in the survey as promising directions. API-based solutions and ecommerce have also gained steady interest.
Frontend frameworks and languages
As to frontend development, the role of JS technologies is definitely growing in our current projects. React, Vue, and Angular have already become part of our business solutions. This is also reflected in the survey results. Interest in leading-edge React and Vue is strong among our frontenders while Angular has lost its former popularity. The survey also demonstrates a rising interest in TypeScript.
Things to avoid
The majority of our developers answered that they are quite flexible, quick to learn and ready to work with whatever suits the project best. But not surprisingly, they would rather avoid any obsolete technologies. Some are not enthusiastic about working with Drupal 7 anymore as it’s reaching end-of-life soon and a much better, up-to-date Drupal 9 version is already in use – and the upcoming release of Drupal 10 isn’t far away either.
Open source, open attitude
At Druid we want to make the digital world more functional with every line of our code – with open source and an open attitude, as our motto goes.
According to the survey, more than half of our developers are members of online developer communities, and some of them contribute to Drupal or at least are active members of the Drupal community. In addition, half of our developers said they participate in meetups, hackathons or webinars.
We have always focused on building reliable code and high-quality, long-lasting digital solutions that add value to the customer’s business. The right choice of technologies is necessary to deliver a good product – but technology as such is never the priority, customer needs are. And those needs guide everything we do.
25.05.2021
Acquia Certification – the benefits of the Drupal developer’s ultimate test
During the last year several Druids (myself included) have gotten certified by Acquia – sponsored by Druid, of course. Acquia Certification is the professional certification program for Drupal developers. As a current benchmark in the technology, it verifies that the developer meets the standard and has extensive expertise in the field.
I interviewed our recently certified developers, Sebastian, Markus, Robert, and Simo about their thoughts on the certification programs – Acquia Certified Developer and Acquia Certified Front End Specialist.
The Acquia Certified Developer certification is considered the more general exam: it validates skills in the areas of Fundamental Web Concepts, Site Building, Front End Development (theming), and Back End Development (coding). The Front End Specialist certification is more oriented toward front-end technologies and Drupal’s principles in that area.
Is the certification worth it?
If you ask us, the answer is yes, it’s definitely worth the effort. Everyone found the certification useful both for extending their knowledge and for demonstrating their level of expertise to customers and employers alike. Today, it’s very important to stand out in a competitive marketplace. In my view, this certification is an easy way to verify that your knowledge matches a certain standard.
“Now I’ve got a better understanding of my stronger and weaker points,” Robert said. “This exam became a good opportunity to get the broadest view of the technology and also to identify areas where I can improve.”
Simo added: “I’ve been working with Drupal for a very long time starting with Drupal 5, then 6, 7, 8, and now Drupal 9. The preparation for the exam helped me to check out the current best practices and to get away from the old ways of writing code used in earlier versions of Drupal.”
“This exam was more about the verification of what I actually know,” Markus said. “But it was a good experience.” Sebastian concluded that it was a good opportunity to demonstrate our proficiency to our customers. Indeed, some of our customers highly value this kind of confirmation about the level of Druid developers.
About the exam questions
The questions were based on real work experience and thus they were relevant to daily work, which everyone thought was great. Working with a wide range of projects, you bump into different kinds of problems and should quickly come up with solutions. If you’ve solved some problem before, you can easily find the correct answer in the exam. If not, it’s a good opportunity to learn more about the subject so you know how to approach the problem when you face it.
“There were some tricky questions at first glance, but if you’re to be qualified as an expert in the field, you should know the precise answers to them,” commented Sebastian.
Simo pointed out that coding standards are quite often neglected, and the exam questions remind you about them. Markus found it important that the security related knowledge was tested thoroughly.
Personally I like that an essential part of each Acquia exam is Fundamental Web Technologies where your knowledge of JavaScript and other underlying techs is tested.
Boost in professional development
I think the exam preparation provides you with a comprehensive overview. You start seeing the big picture, and you can discover details you might have missed or never worked with before. It’s also a source of motivation to explore more, to step beyond the theory and apply the learnings in code. So in that sense I think the certification can help you become a better developer.
Both Sebastian and Robert thought that studying for the exam was probably the most beneficial part of the certification program. You can learn entirely new things. For example, I was surprised how much the Layout API and Layout Builder were improved in Drupal 9, and how much attention the Drupal community is now paying to accessibility.
“I’ve got a deeper understanding of caching systems in Drupal. The comprehensive study of the Drupal API in general and the in-depth look at backend concepts should also be beneficial,” said Robert.
Markus pointed out that sometimes you’re more influenced by your peers than by any test as you learn from the actual building of the software, not from reading a book. But in both cases you promote yourself by applying new knowledge in your projects.
So if you’re into measuring your Drupal expertise…
We all definitely recommend the certification. If you’re planning to get certified, the tips from the study guides provided by Acquia come in handy.
Basically, you have two main ways of measuring your expertise – via experience and real life projects, or via certifications. According to the guys, there are often debates over whether IT certifications have some value or not, but in this case they admit the test is useful, especially for Drupal CMS where the learning curve is quite steep. They suggest pursuing something similar for Vue or React if you’re more focused on frontend for example, as this certification is naturally mostly focused on Drupal.
“While it’s good to verify your expertise by passing the exams, you should not forget about contribution to the community which is also one way to show your knowledge,” added Markus. “If you don’t contribute that much, certification is a good way.”
The certifications themselves do not prove that you’re the most talented developer in the world, but they definitely help your career, especially if you’re just in the beginning. They’ll also help you to get noticed by big companies who pay a lot of attention to your education and certifications. Besides, it never hurts to learn something new. So, I encourage you to just go for it!
17.05.2021
8 things we learned at Druid
Last week we said goodbye to our wonderful interns Florence and Bea. During their four-month internship with us, they built a demo application with JavaScript for car repair shops and their customers – and did an absolutely brilliant job.
The app lets users choose the nearest car repair shop, book an appointment for car maintenance, keep in touch with the shop through in-app chat and see the progress stage of their car as it’s being repaired. The app was built with the user in mind, focusing on the user’s convenience.
Before Bea and Florence embarked on new adventures, we asked them to write about their learnings and experiences from the project and beyond. It turned out they had a lot to share – so let’s dive right in!
1. Progressive Web Apps (PWA)
PWAs have been in the app ecosystem for some time, and you have probably used one without knowing it. They are web apps with enhanced capabilities in modern web browsers.
Among the features we discovered are installability, an offline experience, linkability, a native-app feel, re-engagement capabilities and a single codebase that works across devices. It was interesting to learn that companies like Uber, Trivago, AliExpress, Spotify, Starbucks and Pinterest already use PWAs as their web service platform.
Although it was a new concept for us, a lot of research enlightened us on how we could apply it in the project we wanted to build. We built a PWA with Create React App (CRA) which was incredibly convenient because CRA has a template for building PWAs.
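To give a rough idea of what that looks like in practice – the file and function names below follow CRA’s PWA template (cra-template-pwa) and may differ between CRA versions – opting in is mostly a matter of registering the generated service worker from the app’s entry point:

// src/index.js – a minimal sketch based on CRA’s PWA template
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';
import * as serviceWorkerRegistration from './serviceWorkerRegistration';

ReactDOM.render(<App />, document.getElementById('root'));

// The template defaults to unregister(); switching to register() turns on the
// service worker so the app can be installed and work offline.
serviceWorkerRegistration.register();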
2. Teamwork
Team synergy was by far one of the most important factors that influenced the outcome of our project. This encompasses a lot of factors ranging from setting project expectations to good and clear communication.
We tried as much as possible to be on the same page throughout the building of the project. From the technical perspective, we set up a system where we could review each other’s code before merging changes to our main branch. That meant we had to ensure we were writing readable and understandable code. To keep our codebase uniform, we enforced rules with ESLint.
After working independently on complex problems for more than half of the internship, we decided to try pair programming. To be honest, we wished we had started it much earlier, because we realised how much knowledge sharing helped us arrive at solutions quickly.
Home office. Luckily our internship wasn’t an entirely remote experience as we were able to work in the office as well.
3. Who needs a server these days?
You may have heard about a “serverless backend”, which means running your server-side code without having to maintain your own server. The solution is tempting because it cuts costs (you pay on demand, not for the server’s idle time), it’s easy to scale and it lowers administrative overhead.
AWS Lambda is a popular choice, but managing service discovery, API gateways, and keeping your app and the functions in sync can be overwhelming – that’s where Netlify functions come to the rescue. We chose to use them in our project, because they are a make-it-easier layer over Lambda functions, which means we could use them without an AWS account; also, keeping everything up to date was a breeze.
With the serverless functions in a correctly named folder in the project, we deployed our React frontend together with the serverless backend at the same time, and Netlify did the dirty work of handling all the rest. Add to that continuous deployment straight from our GitHub repository, including previews of pull request updates, and you’ve got yourself a recipe for painless deployments!
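As a small illustration of that setup – the file name, folder and data below are hypothetical, not from the actual project – a Netlify function is just a JavaScript file in the functions folder exporting a handler:

// netlify/functions/appointments.js – a minimal sketch of a serverless endpoint
exports.handler = async (event) => {
  // event describes the incoming HTTP request (method, headers, body, ...)
  if (event.httpMethod !== 'GET') {
    return { statusCode: 405, body: 'Method Not Allowed' };
  }

  // In the real app this would come from the database; here it's dummy data.
  const appointments = [{ id: 1, stage: 'in-repair' }];

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(appointments),
  };
};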
4. Mercure for real-time features
For our app we needed real-time communication to power our in-app chat and to update the end user’s UI when the admin changes the state of the user’s appointment. We decided to use Mercure, an open protocol that doesn’t use WebSockets but is instead built on top of HTTP and SSE (Server-Sent Events).
With the Mercure Hub set up (using Docker image for local development and deployed to Druid’s server for production), we needed to do two things in our app: publish updates and subscribe to them.
Publishing happens when someone sends a new chat message or the admin changes the appointment state – a regular POST request is sent from our frontend to the serverless backend, where the data is saved to the database and a Mercure update is published to the hub.
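A sketch of what publishing can look like from a Node-based serverless function – the hub URL, topic, JWT environment variables and the node-fetch dependency are illustrative assumptions, not details from the project:

// Publish an update to the Mercure hub (minimal sketch; values are hypothetical)
const fetch = require('node-fetch');

async function publishUpdate(appointmentId, payload) {
  const body = new URLSearchParams({
    topic: `appointments/${appointmentId}`,
    data: JSON.stringify(payload),
  });

  // The hub expects a form-encoded POST authorized with a publisher JWT.
  await fetch(process.env.MERCURE_HUB_URL, {
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.MERCURE_PUBLISHER_JWT}` },
    body,
  });
}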
Then the Hub’s job is to pass this update down the correct channels, so that only the users subscribed to it get the information. Subscribers are browsers; for example when an end user opens their appointment view, the app subscribes to updates about their appointment (change of stage or price estimate) and to updates of their chat (and only theirs, so that they don’t get somebody else’s messages by mistake).
To subscribe, we used EventSource, which is a kind of keep-alive connection to the server (the Mercure Hub in our case). It differs from WebSockets in that EventSource is one-way communication: it can only listen to updates, not send any – which was all we needed.
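On the frontend, subscribing takes only a few lines – the hub URL and topic below are hypothetical, and authorization is left out for brevity:

// Subscribe to Mercure updates for a single appointment (sketch)
const hubUrl = new URL('https://example.com/.well-known/mercure');
hubUrl.searchParams.append('topic', `appointments/${appointmentId}`);

const eventSource = new EventSource(hubUrl);
eventSource.onmessage = (event) => {
  // Every update published to this topic arrives here as a JSON string.
  const update = JSON.parse(event.data);
  renderAppointment(update); // hypothetical UI handler
};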
In-app chat
5. Web push notifications
Like any new feature implemented without prior knowledge of how it should work, push notifications were mostly learning by doing. Push notifications are messages sent to users’ devices from a website via the browser. And thanks to the offline capabilities of PWAs, users don’t miss notifications even when they are not online.
Looking at our app’s use case, push notification capabilities were useful for notifying users whenever there was a change in information. For eCommerce and marketers it’s an amazing way to re-engage with web visitors whenever there are new products, releases, etc. It was relevant to build this feature, especially as PWAs are becoming more popular and being supported by more browsers.
6. User experience
We needed to keep the end user in mind as we worked on our app. Have you noticed elements like videos, buttons or fonts shifting unexpectedly on a web page while it’s still loading? Exactly – that makes for a poor user experience and is measured by Cumulative Layout Shift (CLS), a Google metric used to gauge the user’s experience on a web page.
CLS is usually a good way to detect coding issues that could be resolved to improve usability on your site. These may be tiny details that slip through during development and seem “irrelevant”, but they definitely count! What’s the point in building an app that lacks usability? Learning about CLS and its importance showed us that caring about user experience is an important skill – it makes us better developers.
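As a hedged aside, layout shifts can be observed in the browser with the Layout Instability API; the snippet below keeps a simple running total, which is a simplification of how Google actually windows the CLS score:

// Log layout shifts as they happen (supported in Chromium-based browsers)
let totalShift = 0;

new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Shifts caused by recent user input don't count towards CLS.
    if (!entry.hadRecentInput) {
      totalShift += entry.value;
      console.log('Layout shift:', entry.value, 'running total:', totalShift);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });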
Admin side
7. Scrummy Scrum
Agile methodologies have become the default way of working in the software development field, so to nobody’s surprise we used the Scrum method in our project. We learnt about this style at school, but only working on a long-term project unveils the true power of Scrum.
Thanks to the regular feedback sessions in retrospective meetings, every iteration brought fewer conflicts and misunderstandings, and with every sprint we worked better and more efficiently. For every ticket in the backlog we assigned points to estimate the time and effort needed to complete the task, which helped us better understand the goals, the expectations and each other’s views.
We also made mistakes like not deploying every sprint – it came back to bite us in the last few weeks of the project as we ended up with several issues accumulated in production. Debugging and understanding which error is caused by which part of the code failing was unnecessarily complicated, so lesson learnt!
8. Teal is the new black
There’s so much more in the management philosophies landscape than the traditional, hierarchical way of big bosses, small bosses and the workers. Druid is slowly but steadily undergoing a Teal transformation, aiming at a flatter management structure.
Many tasks at the company are being taken care of by swarms – a group of people interested in the issue or topic around which the swarm was formed. The doers declare their readiness to put time and effort into the subject, the helpers can offer a little less time, and the followers are interested in the works, but for one reason or another can’t promise much help.
In our time at Druid we had a chance to observe among other things the work of the salary week swarm who took care of designing and executing salary negotiations. The best thing about swarms is they can be formed as issues or tasks arise, and torn down when they solve what they were born to do or when they become inactive and die out.
Another part of the Teal way that we found interesting is the advice process helping in decision making. When there is a decision to be made, one person volunteers to be the decision maker and asks for advice, especially from people directly affected by the decision and from experts on the topic. Others can then give advice (not their opinion, but strictly advice), but the final choice of course of action is made by the decision-maker – that also includes full responsibility for the outcome.
29.05.2020
Drupal 9 is soon here – the upgrade may be a breeze or a great undertaking
A new version of the Drupal content management system will be released on June 3rd 2020. If you are now using Drupal 8, you will have to migrate to Drupal 9 by November 2021. Drupal 7, on the other hand, will have a longer transition period, until November 2022. After these deadlines, support for the earlier Drupal 7 and 8 versions will cease and security updates will no longer be provided for them. What will change with the new version 9? How much work is it to upgrade the system?
Good news first: upgrading will be over in a jiffy if you are running an up-to-date Drupal 8 system. Basically nothing will change. For example, our site here is already running on a beta version of Drupal 9, and the upgrade was done in essentially no time at all. However, not every case will be as straightforward, and Drupal 7 based websites in particular will be in for quite an undertaking.
Drupal 8 upgrades easily and without risk
There’s no denying that many of Drupal’s previous major version upgrades have been quite laborious and even somewhat tricky, requiring a complete website overhaul from a technical standpoint.
But now everything is different. This time Drupal hasn’t been completely reinvented, and the version upgrade promises to be the easiest in a decade – provided that your web service is running the latest Drupal 8 version, since Drupal 9 is not that different from it.
From a technical standpoint, Drupal 9 is like the last version of Drupal 8, with deprecated code cleaned out and dependencies for third party systems updated. The migration is likely to be simple and smooth with no need for large overhauls for your website.
A basic website upgrade to Drupal 9 will take next to no time, as long as the site is up to date and doesn’t use obsolete modules or APIs. If your site uses contributed modules, you should first check whether they are ready for the upgrade, and custom code should be reviewed beforehand as well.
What if you are still running Drupal 7?
Long story short: this is the time you should start considering and planning a website overhaul, since you will be in for quite a big project with a deadline looming on November 28th 2022 when the support for Drupal 7 ceases.
Drupal 7 is still widely used, but updating to version 9 will inevitably be a much more complicated affair, or at least more laborious. The technology of Drupal 7 websites will have to be completely overhauled to migrate to version 9, since the technological changes between versions 7 and 8 were so substantial.
The good news is, however, that in all likelihood this will be the last big migration that your web service will ever need.
This is because Drupal’s product development has shifted from a rather heavy project based model to a more modern and agile continuous development process: instead of tearing down and reintroducing the whole system every few years, new features and improvements will now be released in a faster cycle and with less upgrade effort.
Why should you upgrade to Drupal 9 now rather than later?
Feature-wise, Drupal 9 is a match for Drupal 8. Its purpose is to offer as effortless a migration from Drupal 8 as possible, with revisions done under the hood only to enable security support from November 2021 onwards. That means no hurry, right?
Well, there shouldn’t be a need to panic just yet, but we strongly advise you to upgrade as soon as possible, because, going forward, new features and improvements will be released twice a year through smaller updates. The next such update, Drupal 9.1.0, has been scheduled for release in December this year.
For example, the modernization of the admin interface is on its final stretch at the moment. It will introduce improvements to site administration and content management. When you upgrade your web service to Drupal 9 early on, you will be at the forefront of the system’s development cycle and will be able to reap the benefits of the continuous development process.
We can help you upgrade your Drupal system
We are among the top Drupal experts in Finland, and we know Drupal inside and out. Contact us – we’ll see what it will take to upgrade your web service to the new Drupal 9 version.
With our clients we have already gone through their upgrade needs on a preliminary level at the least. If this post has raised new questions or you have something on your mind, by all means get in touch with us.
Edit June 25th 2020: Drupal 7’s end-of-life has been extended until November 2022 due to COVID-19 impact on budgets and businesses. The text has been updated accordingly.
25.11.2018
Marko Korhonen
And now for something completely different
Druid is known for being a Drupal house. And we are. Most of our projects are still Drupal projects, the kind where a CMS is needed. But Drupal is not always the right tool for the job. As Dries Buytaert stated at DrupalCon Vienna, “Drupal is no longer for simple sites”, and it’s not for every use case either. That said, Drupal is still a very good choice for the right jobs.
We have recently had some non-Drupal projects, and I’m going to tell you about one of them. Non-Drupal doesn’t mean the tech stack was completely new to us, though, as we are already very familiar with the programming languages themselves (PHP and JavaScript).
The need – say no more, say no more
Our customer had a need that wasn’t fully structured yet, as they were targeting a market undergoing major disruption. The digital landscape for this particular market is evolving, and as is often the case with evolution, you need to adapt or perish. What we knew from the start was that the application needed to be mobile-friendly and that its main function would be to facilitate communication between users. Try creating a backlog from that!
Mobile? So we thought PWA is the way to go. At Druid we have been fans of PWA for a while, and we have written about this exciting concept before. Progressive Web Applications are the future and show much promise. PWA is also a very good choice when you want to distribute your apps without App Store or Play Store. For example, some internal tools could be delivered from intranets. I’ll tell more about PWA later. Anyway, we thought that PWA would suit the need very well.
Communication? Wink wink nudge nudge. Say no more, say no more. This raised ideas for data structuring and for the UI.
Tech stack – And Now for Something Completely Different
We decided to go bold and try totally new components for this project: frontend, backend, database and infrastructure. Of course, there was knowledge and learning behind these things with some PoCs and such. And like I said, we were familiar with the tech. Or at least someone on the team was.
If we start from the bottom, we chose Docker as the glue between components and between environments. We quickly drafted an open source version of this Docker setup and released it separately as Stonehenge, a multi-project local development environment and toolset on Docker. The project now uses Stonehenge as the developer’s tool to run it. Since a local development environment is quite a common problem for developers, we thought this could be beneficial for others too – which is why we extracted the functionality from the project and released it as a separate tool.
Basically, Stonehenge provides us with local URLs and a proxy to handle the traffic to our projects. The proxy is built with Traefik, which is a breath of fresh air compared to any tech previously used for the same purpose. I can say it works and performs very well in production too!
The project itself defines the services for our application. The basic stuff like Nginx, PHP, database and CLI and their relationships. We use Docker Compose for this.
For the application itself, we chose Symfony 4 for the backend and React for the frontend. Basically, Symfony provides a standard JSON API which the React application then consumes. One reason we chose Symfony was its support for our database (what could it be?) via Doctrine. When evaluating different backend (PHP) frameworks, we studied the experiences of other developers, and there seemed to be nothing but praise for Symfony 4. And it really was a pleasure, I can say. Some of us already had experience with Symfony, as Drupal 8 is built on top of Symfony 3. Still, there were many new things for us to learn.
The database. MySQL, MariaDB, PostgreSQL or something else? We ended up choosing MongoDB to complete our jump into the unknown. MongoDB is a so-called NoSQL database where data is stored as documents instead of rows. This is very useful when data can vary in structure (read: one document might have fields that other documents of the same type do not have). Also, the schema does not need to be defined beforehand (and updated) – it simply evolves with how you use your document entities.
React is something I personally cannot write very much about; however, my colleague Kristian has written a small recap of its use in our project below. Beware though: the dependency hell of the JavaScript world and the vast number of same-but-different tools available can sometimes make a developer’s life not so easy.
Quote from Kristian:
“It was pretty great to work with React on a large-scale project. In general, it was surprisingly easy. We did make some mistakes early on, which ended up forcing us to refactor. The original plan was to render the majority of the application with Twig, and only the communication aspect would be controlled by React. Eventually, the majority of the application was ported to our React app.
This means that we initially didn’t implement any sort of routing system and we didn’t think enough about the architecture of our store. Luckily we were able to refactor with relative ease once these problems presented themselves.
Probably the most fun thing during this application for me was working with MobX. This is something I’ve wanted to do for a long time, and I’m glad I got a chance to finally use it in a commercial application. Essentially MobX is a state management tool built on the observer pattern. All observers who are watching an observable variable will magically update whenever the observable value changes. If there is one thing I’d do differently next time, I’d probably use MobX State Tree, which is a more opinionated version of MobX with some Redux-like behavior, without the overhead of Redux.”
Well, there you have it.
As we jumped on the PWA bandwagon, React helped us create what is basically a single-page app, which takes care of some aspects of a PWA. What does a PWA do, one might ask? You can think of it as an app that is basically a webpage. That means no App Store or Play Store for distribution – the app is updated whenever the web application is updated. A few distinct features make it an “app-like” experience: caching of assets for speed, standalone mode (to make your own UI without browser components), access to some mobile APIs and offline capabilities. There are also push notifications, which currently work only on Android. Check the 2018 State of Progressive Web Apps for more info.
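As a small, hypothetical illustration of the standalone mode mentioned above, a web app can check at runtime whether it was launched from the home screen or in a regular browser tab:

// Detect whether the PWA is running in standalone ("app-like") mode
const isStandalone =
  window.matchMedia('(display-mode: standalone)').matches ||
  window.navigator.standalone === true; // iOS Safari's non-standard flag

console.log(isStandalone ? 'Running as an installed app' : 'Running in a browser tab');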
Funnily enough, Progressive Web Applications are something Apple envisioned already in 2007 when they announced the original iPhone. Currently they are very strongly driven by Google, which basically means that PWAs work better on Android phones at the moment.
Fortune favors the bold, and I strongly believe PWAs are the future of apps, especially useful when creating tools for organizations and businesses.
Well, that was quite a long rant about technical stuff. Let’s take a breath for a moment.
“And now a film about a man with a tape recorder up his brother’s nose”
GDPR aspect
Yeah, the infamous GDPR. At this point we don’t have actual user data, so we’re nicely covered. But we also based the whole development on dummy data, which proved very fruitful during the project. We programmatically added data fixtures, which filled our database with dummy data. So in that sense, we’re totally GDPR-compliant when we do development. We don’t need production data.
This gave us some unexpected benefits too. In testing for example, we quickly noticed that we can use this generated data within our tests. How cool is that! Well, it is quite cool I can say.
“In this picture, there are 47 people. None of them can be seen.”
Next steps
The project is now in MVP state and moves to the “field-testing” phase. Hopefully, end users will like what we have done. The testing will be done with small groups and in a controlled way, meaning we use a certain set of generated data suited for that test group. We also control the PWA aspect of the MVP: at the moment it works best on Android, so that will be the platform used in testing.
“Now, what’s to be done? Tell me sir, have you confused your cat recently?”
About the coconut
We have learned a lot! Our learnings are already influencing our new projects and, to some extent, how we handle older projects. We’re about to reuse the backend setup for a new API-only project, and the Docker setup has evolved into a very usable state. And releasing an open source project (Stonehenge) was a definite plus!
The most exciting part at least for me has been to share all this with more and more people inside Druid so that all the learnings can be put into practice on a wider scale.
“Wait a minute — supposing two swallows carried it together?”
Marko Korhonen CTO, The Ministry of Silly Walks
08.02.2018
Working with Progressive Web Apps
At Druid we are always keeping our eyes open for new technologies, which can benefit both us and our clients. Recently we have been experimenting with Progressive Web Apps.
TL;DR
Progressive Web Apps provide handy app-like functionality for web applications
The two major requirements are the manifest.json and a service worker
PWAs can be the answer to users’ unwillingness to install apps on their phones
So without further ado, let’s delve into the world of manifests and service workers…
What are Progressive Web Apps?
PWA is a term for web apps that share certain features with mobile applications (i.e. phone apps). Essentially, a PWA is a web app with several modern web capabilities designed to give users a browsing experience similar to using a mobile app.
Before we get started, it should be mentioned that some of these features will not work on all phones. As of writing this blog post, iOS does not yet fully support PWAs.
The requirements
For browsers to consider your app a Progressive Web App, it must meet a list of requirements.
The most important points are these:
Site is served over HTTPS
Pages are responsive and mobile-friendly
Site must work offline
A manifest must be provided
A service worker must be registered
Site must load fast enough for 3G
Most of these are things you would include anyway if you were building a web app. However, there are two important aspects that make PWAs different from normal web apps, and I will give a brief introduction to them here:
The Manifest
A manifest is a simple file which essentially tells browsers that this is a PWA. The specification can be read in detail here: https://w3c.github.io/manifest/
It’s possible to define a lot of functionality in the manifest, such as a visual theme, orientation (landscape or portrait), basic configuring of colours, etc. The only mandatory fields of your manifest are name and short_name. These describe the name of your app.
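For illustration, a minimal manifest could look something like this – only name and short_name are required, the other fields shown are optional extras:

{
  "name": "My Progressive Web App",
  "short_name": "MyPWA",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2b2b2b"
}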
The manifest must be referenced from your HTML head, like so:
<link rel="manifest" href="/manifest.json">
Service Worker
This is kind of the body of your PWA. It is able to process all requests to and from your site from the client’s browser. Every time the user makes a request to your site, the ‘fetch’ event is triggered and you can handle that request however you wish.
A service worker runs in your browser (it does not have access to the DOM). The service worker is a “special” JavaScript file, which runs in a different scope than regular frontend JS. The browser can execute this script without the page being open. It can even be executed while the browser is closed. This is extremely handy for several app-like features, such as push notifications. In this article we will only scratch the basics, so we will not really be using much of this advanced functionality.
Unlike regular frontend JavaScript, the service worker needs to be registered with the browser’s navigator.
I’ve included the script app.js in my HTML and from there the following code is executed:
if ('serviceWorker' in navigator) {
navigator.serviceWorker
.register('/sw.js')
.then((registration) => {
console.log('Service worker registered', registration);
})
.catch((error) => {
console.error('Something went wrong with registering service worker', error);
});
}
Then, in the service worker, you can add event listeners. The most important events are install and fetch.
install
This event runs the first time the user visits your site. Since it will only run once, it is expected that page load might take slightly longer than normal during this request.
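Below is a minimal sketch of a typical install handler; the cache name and the list of URLs are illustrative:

// sw.js – cache a few core files when the service worker is installed
const CACHE_NAME = 'my-pwa-cache-v1';
const urlsToCache = ['/', '/index.html', '/js/script.js', '/css/style.css'];

self.addEventListener('install', (event) => {
  // waitUntil keeps the worker in the installing phase until caching finishes.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(urlsToCache))
  );
});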
The code above defines the name of the cache we will be using and an array of URLs that should be cached immediately. The cache will be created if it does not exist yet. You can add any internal URL here. Just keep in mind that the service worker has a certain scope, determined by its location. In the example, the service worker is located at /sw.js, which means that its scope spans the entire site. If it were located at /swdir/sw.js, it would only be able to handle requests within the /swdir/ URL.
fetch
Now comes the exciting part – fetch. As I mentioned earlier, this event is triggered every time (except the first) a user makes a request to your site.
self.addEventListener('fetch', (event) => {
console.log('Fetching data for', event.request.url);
event.respondWith(
caches.match(event.request).then((response) => {
if (response) {
console.log('Returning ' + event.request.url + ' from cache');
return response;
} else {
console.log('Fetching ' + event.request.url + ' from network');
return fetch(event.request);
// TODO Add fetched file to cache
}
}).catch((error) => {
// TODO Handle error
});
);
});
Remember, fetch is triggered every time a request is made for an individual file on your server. So for example, if the user visits index.html, a fetch event may be triggered individually for the URLs index.html, /js/script.js, /css/style.css and /images/myCat.png – in that case it will run four times.
The previous code is a simple way of serving offline content. For each request, the service worker checks if the file already exists in the cache. If it does, it is served to the user. Otherwise, the file is fetched from the server.
With this simple code, it is possible to have an offline PWA up and running.
Some advice
While the concept of PWA isn’t extremely daunting or anything, there are definitely certain aspects that can be confusing and frustrating. Listed below are some of the more useful tools I used when learning the basics of PWA:
Lighthouse is a built-in tool for Google Chrome designed specifically for testing Progressive Web Apps. Just go to your page, open Chrome DevTools and select the “Audits” tab. Lastly, press the blue button called “Perform an audit…”. Chrome will then run all the tests and give you the results in an easy-to-digest list. In case of failed audits, the reason for failure should be pretty self-explanatory. If you’re like me and like the shotgun approach, just keep modifying your code and re-running Lighthouse until it’s all green. Do keep in mind that your site needs to be served over HTTPS, so if you’re testing on localhost, this audit will invariably fail.
Google has some very good and up-to-date resources on PWA. One nice thing about their docs is that it will automatically warn you if the article is old and therefore more likely to be deprecated.
Conclusion
Progressive Web Apps are an exciting new technology, and it is definitely worth investing some resources into learning it. In an age where users no longer want to install apps on their mobile devices, PWAs serve as a nice middle ground between modern web applications and more traditional mobile applications.
11.10.2017
Design process matters – here are three reasons why
Have you ever paid attention to how your web service is being designed? How do the design and the technical implementation mesh together? I can promise that it’s not a given that they do. In this industry it is surprisingly common to see clumsy and unclear design conventions in use to the benefit of neither the client nor the service provider.
A good design process helps you create a consistent user experience throughout the web service, and tackles many challenges that project management faces. It also saves time and money. That’s why we here at Druid have made our design process a priority in the last couple of years.
“What should a design process be like?” you ask. Agile, efficient, and optimized – a process that can achieve the following three powerful advantages:
1. Scalability
A good design process makes it possible to create scalable design work. What scalability basically means is that instead of designing separate web pages, you design the components that web pages are built with.
What’s the use of scalability? For one, the user experience of your web design will be consistent when the same components repeat throughout the service. A seamless and effortless user experience is a crucial factor in the digital world when you try to turn users into customers. Secondly, scalable design saves time and energy (and therefore costs), because you don’t have to design everything separately from scratch. This means more results with less trouble.
What about when you want to update the styles of your site? That is where a style guide comes into play. Style guides allow you to make site-wide style updates and to easily modify the styles of your components. And since all the changes are updated into all of the components at the same time, the consistency of the site’s styles can easily be maintained.
2. Cost-effectiveness
A good design process ensures that your service is being built cost-effectively. The scalability of the design work alone creates considerable savings. Another aspect that factors into the costs is the agility of the design work and its synchronization with the technical implementation of the project.
When the design and the implementation of a project move ahead concurrently in cycles, possible issues and solutions can be considered ahead of time and from several points of view. When using the right tools, design solutions can be tested and iterated at an early stage. This way you can avoid many potentially burdensome and laborious modifications at a later stage.
If, however, you’re using a straight production pipeline where the design and implementation follow each other chronologically, you can easily find yourself in a situation where you’ve used too heavy a toolset prematurely and spent money designing something “too” fancy before the client’s actual needs are even known and understood.
We tend to create lightweight prototypes of the components we design before creating them for the actual system. This way you, as our client, get to give us your input at an early stage – before modifications and changes are laborious and expensive.
3. Better basis for success
A good design process provides excellent tools for managing the expectations of the customer and helps the customer be a part of the process from the get-go.
When the design process is on point and clearly communicated, it helps the customer to better understand the roles of both parties and the division of work between them. The client will also know when to give input and what kind of input is expected from them. When the client is an integral part of the team, unfortunate surprises can be avoided by both parties, and it becomes more transparent how fast things can realistically be achieved.
At Druid, every project starts with a design kickoff. During the kickoff, we and the client go through how we carry out our projects. We agree on the goals and the metrics with which we measure success. And as is fitting with agile methodology, the goals and metrics may refocus or reform as the project progresses; the further the project develops, the more we learn and the wiser we become.
As you can see, a flexible and clearly defined design process can provide considerable advantages. How do we handle this in practice? Contact us and we’ll tell you more!
20.09.2017
Samuli Aalto-Setälä
Making super fast virtual machines with passthrough
Reader beware: this text is highly technical! It’s meant for fellow developers and tech enthusiasts. We’ll take a look at using virtualization as a no-compromise replacement for dual booting between operating systems, emphasis on the word no-compromise. Many tasks are quite feasible even with a basic VM, for example testing websites in a legacy browser not available on current operating systems. For more demanding tasks, booting to another natively running OS and then back just for one specific app or a gaming break is cumbersome. So what can be done?
Getting up to speed
Emulating a CPU, graphics and I/O has a heavy performance cost. We can get away with it when running old applications on modern hardware (think of DOSBox and video game console emulators), but we need something faster than emulation for a snappy VM running new operating systems and apps. To do this, the virtual machine monitor has to bring the guest system closer to bare metal. Hardware-assisted virtualization has been available on server and consumer CPUs for years (as Intel VT-x and AMD-V). It allows executing guest instructions on the real CPU with far less overhead than emulation, which is crucial for running x86 virtual machines at nearly native performance. It’s also widely supported nowadays.
On the I/O side of things, various paravirtualization methods have been used to boost performance compared to emulation. What this means is that the virtual machine manager exposes special devices that allow fairly direct access to actual host hardware through an API. This provides faster disk, networking and timing support in virtualized environments. Even limited hardware-accelerated graphics support exists in various virtual machine managers, where OpenGL and Direct3D up to version 9 are supported via an API translation layer similar to Wine. Paravirtualized devices often need specific device drivers that have to be installed on the guest OS.
Another step further is to provide a guest system with dedicated, exclusive access to actual devices on the host machine. We call this method PCI passthrough. Since the guest has direct access to the device, it is controlled by the same device drivers and has the potential to provide the same performance and device specific functionality as on bare metal. For instance, TRIM commands can be sent directly to an SSD connected to a passed through disk controller. Dedicated storage and networking hardware can be useful for demanding server use cases where the best performance is needed without giving up the benefits of virtualization. On the desktop, an interesting use case is dedicating a graphics card (VGA passthrough) to a guest OS and running graphics-intensive applications with high performance.
The software side
Support for PCI passthrough exists in various virtualization software. However, for VGA passthrough specifically, the common and well-documented approach is to run a VM using the Linux kernel’s KVM as the hypervisor, QEMU as the userspace emulator and OVMF as the UEFI component. Trying out QEMU is relatively straightforward: virtual machines can be fired up from the command line, and all the needed configuration options are given as arguments. Host devices can then be given to a VM using a helper driver called vfio-pci.
If all goes well, you’ll have a VM with direct access to the piece of hardware, with minimal overhead. Pretty much any PCI-E device can be passed through, with caveats (we’ll get back to these in a moment). Many motherboards have their SATA and USB ports spread out on more than one controller. Then, one of them could be dedicated to a VM. My own VM setup has its own graphics card, USB controller (mouse and keyboard can be switched between the host and guest with a switch), an add-on SATA controller and the onboard audio passed through. After some tinkering, optimization and figuring out what works best, I’ve given up on dual booting because the VM is simply free of compromises.
The fine print
Let’s look at the caveats, then. Doing this obviously requires appropriate hardware. Most importantly, both the CPU and the motherboard need IOMMU virtualization support (Intel VT-d and AMD-Vi). Luckily, these have been available on many if not most consumer platforms for years. However, a working implementation on a motherboard is not a given even if VT-d or AMD-Vi support is advertised. Though in many cases fixes have been provided via BIOS/UEFI updates, virtualization features are hardly a priority on mainstream consumer hardware.
Another common issue is something known as IOMMU grouping, which deals with the separation between devices. To put it simply, a device cannot be passed through if other devices belong to the same group, unless you pass all of them; otherwise they could interfere with each other and nasty things could happen. How your onboard devices and add-on card slots are grouped depends on the motherboard and the chipset it’s based on.
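To see how your particular board groups its devices, a commonly used shell snippet (assuming sysfs and lspci are available) simply walks through /sys/kernel/iommu_groups:

    # List every IOMMU group and the PCI devices belonging to it.
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo "    $(lspci -nns "${d##*/}")"
        done
    done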
Finally, the PCI-E devices themselves can have firmware bugs that cause them to behave badly when passed to a VM. Some hardware vendors are also known to implement VM detection in their (consumer hardware) drivers and to prevent them from working inside one. With server-grade, and to some extent enthusiast consumer hardware, you might avoid these issues. Still, it doesn’t hurt to do some research on your prospective components before building a setup like this.
All in all, we have just scratched the surface on this matter and this is only intended to provide an introduction. If you’re interested in this kind of thing, I recommend checking out the links below for more details. You might also ask, what’s the point? Considering the amount of time spent tinkering and possibly the cost of additional extension cards for VMs, one could just buy a whole extra machine and be done with it. But where’s the challenge and fun in that? 😉
Passthrough and virtualization in general will quite probably remain a niche for regular desktop/laptop users, but time will tell what happens in the business and server world.
07.08.2017
A static content layout lifesaver: Paragraphs module
As Drupal website users, we all know how boring static basic pages and detail pages are. As Drupal developers, we all know how frustrating those boring basic pages and detail pages become when someone (indeed, the content editor) tries to change the layout by messing with the source code in the WYSIWYG.
Frustrate no more. The Paragraphs module is here to save the day (and your content layout)!
What is it, this Paragraphs module?
Imagine you’re a content editor and you want to create an article with some text, a few images, and maybe a caption underneath an image. You’d also like to have a quote that stands out with a full-width background image. How would you do that with the WYSIWYG? You try to modify the HTML in the source code and hope it turns out exactly like you had in mind.
Of course, that’s wishful thinking. But there is a solution. One that doesn’t require a big WYSIWYG field, but a solution that lets the content editor create every section of the article separately.
Curious how it works? Let’s dive right into it, shall we?
(For the examples, I’m referring to a Paragraphs demo site I recently created for Druid.)
The Paragraphs module simply provides a new field type, Paragraph, which you can use in your content type. And for each paragraph type, you can add fields of its own. Sounds familiar? Well… it kind of works the same way as a content type.
(Screenshot: Paragraph fields)
Now you’re probably thinking, “haha, what’s so life-saving about this” and you have a point. The field type is nothing life-saving at all, but the paragraph types behind it are.
Paragraph types
A paragraph type consists of fields, and these fields are the same types as those of a content type (you even get an extra reference option to another paragraph).
After you’ve created your paragraph types, you can add them to your content type – and this is the nice thing about it. Basically, you create a field that references paragraph types. You can either leave the allowed paragraph types undefined, which lets you use them all, or select only those you’d like to use in a specific content type. When you create a new node, you can use as many paragraphs as you want and mix them as much as you like. There’s also a handy little drag-and-drop so you can reorder the paragraphs really quickly and easily.
(Screenshots: Paragraph – Add content; Paragraph – Added content)
Each paragraph type has a matching template file, where you can add the needed HTML markup. This lets you theme each paragraph separately, so you don’t have to worry about the layout of your node. It doesn’t matter whether you add a certain paragraph type at the top of the content or at the bottom; if the theming has been done properly, it will look good.
(Screenshot: Paragraph frontend)
Looking pretty good, right? Try that with one huge WYSIWYG body field. Yup, didn’t think so!
Why you should only use it for static content
The purpose of this module is to create nicer and cleaner content – as in, content created by the client. Overview pages, feeds, or any other item that needs dynamically generated content should probably be handled by something that can do this automatically, like the Views module.
If you want to give the content editor a bit more freedom layout-wise, but at the same time you don’t want to worry about your pretty layout getting messed up, the Paragraphs module is exactly what you need. Dying to try it out? You can find the module here. I truly hope you enjoy using it as much as I do!
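If you want to give it a spin on your own Drupal 8 site, a typical installation (assuming a Composer-managed project with Drush available) looks something like this:

    # Download the Paragraphs module (Composer pulls in its Entity Reference
    # Revisions dependency) and enable it.
    composer require drupal/paragraphs
    drush en paragraphs -y

After that, the Paragraph field type shows up when you add fields to a content type, and you can start building your paragraph types.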