Thank you 2019

A new year always sparks conversations about what we’ve done, who we are, and who we’d like to become. In the day-to-day messiness, it’s often easy to forget all the things you’ve accomplished, so Christmas seems like a natural time to step back and reflect.

In this blog post, we’ve rounded up the top 9 things we achieved during our incredible 2019, and some of what we’re planning to help us start the new decade off right.


  1. TRIPLED OUR CUSTOMERS

When Bea and I started Share PLM 2 years ago, we both wanted to “feel” the results of our work.

In 2019, we were lucky to work with a bunch of new and amazing customers! We’ve had so much fun working with companies like Aqseptance, Eurostep, Konecranes, Methode Electronics, Technia and Quick Release.

This year, we finally felt we were taking off. And we’re looking forward to what’s ahead – and excited to continue growing and learning with new customers.


  2. DOUBLED OUR TEAM

Our team now numbers more than 8 people, which means we’ve more than doubled in size this year.

Myriam and Nathalie have joined us as PLM Consultants, and we recently welcomed Heather as our copywriter. This year, we also welcomed the very first man at Share PLM: Mario, our web and media designer. He’s done a great job of becoming part of our “woman-centric” company.

Doubling our team has been super exciting, but it’s also been challenging. In a knowledge-intensive business like ours, it takes time, care, and money to build the capabilities your people need. Especially when you don’t have good enough processes.


  3. DEFINED AND DOCUMENTED OUR PROCESSES

This year, we set out on a mission to define and document who we are and how we do things, with clear processes and a clarified company structure. Processes become all-important as your team grows: they are the groundwork that allows a team to scale, and they bring transparency.

To visualize our way of working, we put together an intranet that lays out our core processes: Product, Sales, Delivery, Operations and Human Resources.

We’ve gone the extra mile to keep our processes lean and document key tasks in a simple way. We (still) don’t have swim lanes and complex process management software, as our customers do. But it feels like we’ve become more organized, disciplined, and focused.


  4. STREAMLINED OUR WAY OF WORKING WITH NEW TOOLS

When it comes to running a small business, having the right toolset can make a huge difference. This year we’ve been shaping our toolset to communicate better, organize ourselves, and increase transparency.

We’ve expanded the use of Asana, the powerful project management software we use to keep our projects on track. It helps us visualize what tasks are on our plate, what we’re currently working on, and what’s been completed.

We’ve connected Asana to Toggl, a time-tracking tool, to monitor where our time goes. The integration is powerful, and now we can extract insightful reports and calculate how much time we spend on our projects.

To collaborate in real time, we continue to use G Suite, Google’s package of cloud-based productivity services. G Suite allows us to email, chat, meet online, and store and share files. We also use several of its apps, including Google Docs, Sheets, Slides, and Sites.

We also introduced Loom to shoot and share quick videos, Hootsuite to schedule and manage our social media, and ActiveCampaign for marketing automation, among other apps.


  5. MET IN PERSON AT OUR TEAM EVENTS

Technology has made it easier to interact with people across great distances and time zones, and there are a lot of benefits that accrue to us as a distributed team.

At the same time, it means that if we don’t arrange live face-to-face workshops, we might never meet each other in person. For example, Myriam and the team worked together for several months without meeting in person.

There’s no substitute for meeting someone face-to-face. Working remotely actually makes in-person interactions even more valuable.

This year, we hosted 3 team retreats in Frankfurt, Hamburg and Madrid.

We try to organize workshops frequently to improve our way of working, share ideas, and discuss feedback. Over dinner and a good red wine, we learn what makes each other tick and share our true passions.

  6. ATTENDED AND SPONSORED PLM CONFERENCES

Industry conferences are great for learning and sharpening your skills, networking with peers and potential customers, and taking a break from the day-to-day work routine.

I always return from conferences with a lot of new ideas, contacts, and inspiration.

This year, we sponsored PI PLMx London and PDT Europe, and attended several smaller PLM events and user groups.

While people can read about our products on our website, I love to chat in person with potential customers, ask questions specific to their businesses, and show hands-on examples of our eLearning courses at our sponsor stands.


  7. STRENGTHENED OUR ONLINE PRESENCE

In 2019, we continued to build a consistent digital presence through our blog and social media.

Creating interesting and valuable content is an effective way to attract potential customers. It allows you to share your expertise, knowledge, identity and experience with your followers.

Content marketing works for us at #SharePLM, and some of our best customers have reached out to us after reading our articles or following us on social media.

However, creating content regularly is hard. It’s easy to get side-tracked by client work, and we often can’t find time in our days to write. There’s always so much else you could be doing when it comes to content!


  8. LEARNED ONLINE MARKETING AND SEO

When I started Share PLM a little more than 2 years ago, I knew nothing about online marketing and SEO. While we’ve seen results from our online presence, SEO is still an important missing piece of the puzzle. SEO is powerful because you can get heaps of free, good-quality organic traffic.

In 2019, we learned a lot more about online marketing and SEO. We enrolled in a program at #latransformateka, a Spanish online business accelerator, to better understand SEO and content marketing. We’re excited to start implementing some of these learnings and key SEO principles in 2020!


  9. STARTED TO DEFINE A SALES SYSTEM

This year we’ve been experimenting with several sales methods to define our sales system. A sales system is a predictable, repeatable, reliable, and sustainable way to generate and manage sales. We’ve tried both online and offline methods. We’ve tried paid advertising. We’ve even stepped outside of our comfort zone and tried cold-calling and cold emailing. We still don’t have a perfectly crafted sales system, but we’re on our way. In 2020 we want to focus on what works for us and document our selling process in a CRM system.

SAY HELLO TO 2020

This year, instead of listing a vague mishmash of broad ambitions and aspirational targets, we’d like to choose one word for 2020. One word that we can focus on this year, one word that embodies all our intentions and goals for 2020. One word that steers us and keeps us focused, like the North Star.

For 2020, our One Word is…

FOCUS!

In 2020, we want to do less, but do it better.

The Pareto principle holds true for us at Share PLM in many areas. For example, if we look at our projects, 80% of our income comes from 20% of our clients. And if we take sales, 80% of our new leads come from 20% of our sales initiatives.

We want to free up time and energy for the few things that matter most to us, instead of spreading ourselves too thin by taking on too many projects.

We want to focus on the 20% of the work that leads to 80% of the results.

 FOCUS ON OUR CORE BUSINESS

More specifically, we’re looking forward to focusing on online PLM education and training. In 2020, we want to be more intentional about the projects we accept and prioritize eLearning and training.

FOCUS ON CONTENT

Besides that, we want to further strengthen our online presence with a focus on content. We want to create great content on a regular basis. Content that our customers not only use every day to manage their work, but also love.

We’re planning to tidy up our website, improve its design, and simplify our product portfolio.

FOCUS ON SPAIN AND LATIN AMERICA

One of our targets for 2020 is to land our first client in Spain or Latin America. Although we’re based in Spain, we still haven’t worked with any customers in our territory. To achieve that, we’re planning to launch a Spanish-language version of our website and blog, and organize a series of small events in our country.

FOCUS ON INTERNAL TRAINING

Finally, in 2020 we want to invest more in internal training. We’ve realized how difficult it is for someone who hasn’t had previous experience with PLM to understand exactly what PLM is.

That’s why we plan to prepare a toolbox for “To-Be” PLM Consultants, so it will be easier to onboard new talent and show them the steps to success. Going from “doer” to “teacher” and stepping out of our comfort zone won’t be easy. We’ll need to reverse-engineer our flow, turn it into step-by-step instructions, and make it understandable.

Part of the toolbox will be an online PLM course. We want to keep the course practical and let people play with cloud CAD and PLM tools and work on real case studies, so they can get a taste of what it’s like to be a PLM Consultant!

We’d also like to sell this course and experiment with our first business-to-consumer course.

GRATEFUL FOR YOUR SUPPORT

As 2019 comes to a close, we want to express how grateful we are for the support you’ve shown us. We’re just starting our journey, and many of you have helped us improve and keep growing by sharing our content and referring people to us.

Here’s to an amazing 2019, and an even better 2020!

Make Lifecycle the Star That Guides Your PLM Vision

ONE WORD for your PLM Vision: Lifecycle

Last week, I went to observe a start-up pitch competition. During the event, start-ups pitch to multiple investors in hopes of attracting investment, partnerships, or exposure.

  • What’s your one word?

That’s the first question the panel asked each participant. The challenge was simple: lose your product’s long list of features and benefits, and instead pick ONE WORD.

Most participants couldn’t come up with their one word quickly. Their single words didn’t capture the essence of their products.

The few start-ups that could pitch their one word confidently were the ones that won.

Creating a one-word vision helps you drill down to what’s most important. It helps people learn and remember your core message.

So let me ask you the same question:  If you had to boil down your PLM vision into one word, what would it be?

If you’re like most companies, your answer will contain a list full of business jargon.

Unfortunately, most business jargon is vague: not everyone understands what you’re talking about, and it rarely makes for effective communication.

Make LIFECYCLE your ONE WORD

Look at the vision board of winning PLM programs, and chances are you’ll see the same ONE WORD: “LIFECYCLE.”

The commitment to focus on lifecycle value runs deep throughout the most successful PLM implementations I’ve seen.

Lifecycle-centric PLM has huge advantages. PLM isn’t only for engineering anymore—the onus is on every team member in the entire company to capture the product’s lifecycle value. From product to sales, engineering, purchasing and service, everyone needs to think about how to design, make, ship, install, operate and maintain the product. There’s a long lifecycle, with lots of monetization possibilities to exploit: spare parts, upgrades, maintenance and smart connected services, among others.

By paying attention to and valuing LIFECYCLE, you can create a digital thread that keeps you continuously informed about business opportunities, and you can improve your products using feedback from the field.

Let’s take a look at five essential steps to shift your PLM vision toward lifecycle.

1) Let people build on the vision

Spoon-feeding people facts about the benefits of a lifecycle mindset won’t get them to care about it. Rather than force-feeding facts, elicit interest by inviting them to pose questions:

“What questions would you be able to answer if you had access to your product’s digital thread?”

For example, for an industrial equipment company, the main teams could come up with these questions:

  • Product Management: What is our standard product? How is our product structured and designed? What components are available to build the product? What are the customer requirements for the product?
  • Sales: What variants and options are available? What is our offered variant for this customer? What are the customer-specific requirements? What services can we offer?
  • Delivery: What products will we deliver to the customer? What variants should we use? What do we need to manufacture, and what do we need to buy? What components can we use for this customer?
  • Services: What spare and wear parts should we offer? What services do we need to deliver to the customer? What is the installed base of this customer? What feedback are we getting on the product?

You need ways to help people test the vision for themselves. Thinking about the end result is an invaluable resource for building a lifecycle-centric PLM vision. Without realizing it, people will start pointing out things they could know, but can’t with the information they currently have at hand.

2) Clarify the end-to-end process

How do you make your end-to-end process clear? You must explain it in terms of human actions. This is where so many process documentation initiatives go awry. Process charts and endless swim lanes are certainly useful, but they are often so abstract as to be meaningless.

We are wired to feel things about people, not about abstractions. With that in mind, gather every team working with one of your products in a single room and organize a PLM process walkthrough. The idea of a process walkthrough is to break down the process to give every team a basic idea of how it all works together.

From keeping people in the loop on what’s happening, to building relationships with various stakeholders, to negotiating the terms of collaboration—product teams can benefit greatly from a process walkthrough.

After the walkthrough, people in your company will understand what, exactly, their coworkers’ jobs are and why they matter. Not only do process walkthroughs clarify the overall process, they also create an understanding of what works, what’s missing, and what should change.

I’ve always found process walkthroughs incredibly valuable for opening the door to collaboration between and among teams. It turns out that when people see the work being done by other teams, they start valuing it more.

3) Reduce clumsy handoffs between teams

The actions taken by each team at your company affect each of the other teams. Your product definition affects how quickly prospects move through your sales process. Your sales teams affect how easy it will be for engineering to deliver your products on time. And of course, your support and service activities impact whether your customers become promoters—people who recommend you to their peers—or warn their networks to stay away.

Listening to other teams’ needs in their own words is an important reminder that there are people on the other side of the wall. Understanding what other teams need is the first step toward providing the information they need to answer their questions, not because “it’s your job,” but because it ends up creating more lifecycle value.

To put lifecycle at the heart of your business, you need to invest in transparent, easy-to-understand and actionable handovers.

Start by mapping the deliverables each team expects from the others. Ensure that every team understands what they need from their coworkers, who they’ll be handing things off to, and how they can provide further assistance even after the handoff has occurred.

Removing friction from your internal handovers means you can speed up your product lifecycle and create value faster.

4) Focus on information flow

There are various reasons why wrong information can creep into your product lifecycle: unclear instruction flows, cranky integrations, unreliable data, lack of collaboration among team members—and the list goes on.

Product data is spread across multiple systems and teams throughout the lifecycle. It’s often difficult to know where it comes from, where it’ll be sent to, or who owns it. It’s often complex to visualize how things are connected and understand the big picture.

To make matters worse, each team’s work probably happens within more than one system that names things differently—that is, the same information is stored in more than one system, but with different attributes.

 “What does this attribute mean?” “Why is it named differently in the CAD system?” “What system is the owner of the information?” “Why can’t I modify this information?” “Where do these values come from?” “Why is this numbering code different?”

What can you do about this?

Start by giving people an actual window into the company’s information flows. A visual representation of your company’s key product information flows can help people gain both high-level and granular visibility of how information flows between your core systems. It can help teams understand exactly what information is important for the downstream processes.

An information flow map serves as a guide that helps people visualize how relevant data flows through core systems and how the attributes that carry that information are named in each system.
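If it helps to make this idea concrete, here is a minimal sketch of what such a map could look like captured as data. The systems and attribute names are hypothetical, chosen only for illustration:

    # A minimal information-flow map captured as data. The systems and
    # attribute names below are hypothetical, for illustration only.
    INFORMATION_FLOW = {
        "part_number": {
            "owner": "PLM",  # the system of record for this attribute
            "aliases": {"CAD": "PartNo", "ERP": "MaterialNumber"},
            "flows_to": ["CAD", "ERP"],
        },
        "revision": {
            "owner": "PLM",
            "aliases": {"CAD": "Rev", "ERP": "RevisionLevel"},
            "flows_to": ["ERP"],
        },
    }

    def trace(attribute):
        """Answer: who owns this attribute, and where does it flow?"""
        entry = INFORMATION_FLOW[attribute]
        return "{}: owned by {}, flows to {}, known as {}".format(
            attribute, entry["owner"], ", ".join(entry["flows_to"]),
            entry["aliases"])

    print(trace("part_number"))

Even a lightweight artifact like this answers the “what does this attribute mean, who owns it, and where does it go?” questions above.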

Documenting information flows is only the first step towards making information flow smoothly. It’s a laborious task, and you’ll probably spot several places where information just “doesn’t flow” and others where lots of manual work is needed.

5) Build lifecycle ownership and accountability

The commitment to focus on lifecycle value must run deep throughout the entire fabric of your organization. When all of your teams are aligned around the product lifecycle, you can provide a more holistic and delightful experience to anyone who interacts with your company’s products.

Transparency in the workflow will enable people to be aware of each other’s roles and responsibilities and how they complement their own.

Instill a sense of accountability all around, and people will carry it with them during their day-to-day work.

There are no shortcuts to a lifecycle mindset

There are no shortcuts to attaining a lifecycle-centric culture. Working out a bold lifecycle-centric vision and getting every team to talk to each other is the first step. Integrated workflows and a free flow of information give everyone a better appreciation of how other teams and departments are affected by their actions. The better your teams can communicate with one another, the easier it will be for them to share ownership of the lifecycle value.

The goal should be for all members of your organization to consider themselves “guardians of the lifecycle value” in one way or another. That is, all teams should be laser-focused on doing their part to ensure that your company’s products have a well-defined digital thread.

A lifecycle mindset isn’t a box to be checked off. It’s a core value that requires a company-wide commitment to lifecycle value in order to get it right. If you want to win at PLM, make lifecycle the star that guides your company culture.


A Guide to Cloud PLM—Cloud-Hosted Vs Cloud-Based

NOT ALL CLOUDS ARE CREATED EQUAL

Who doesn’t use some cloud applications nowadays? Gmail, Slack, Salesforce… According to a recent CIMdata survey, almost 80% of manufacturers already use cloud-based services. Although ERP and CRM are the most common cloud enterprise solutions, cloud PLM is starting to become more prevalent.

Today, most PLM vendors have jumped on the bandwagon and offer a cloud PLM solution. They have all recognized the undeniable advantages of cloud-based solutions—lower costs, scalability, simplified implementation, reduced IT expenses, and a faster path to ROI.

So if you are considering adopting a cloud PLM solution, you should be aware of the different cloud terminology that’s out there.

Why is this important? Well, although all “cloud” solutions are available on the internet and offer a subscription model, there are important differences between them. Understanding what they are can help you make the best purchasing decision based on your business’s needs and budget.

There are two main models in the cloud PLM landscape you need to understand: cloud-based PLM and cloud-hosted PLM.

What is a cloud-hosted PLM?

Basically, a cloud-hosted PLM is a pre-existing software application that is hosted in a cloud data centre, managed by the vendor on behalf of the customer. The vendor provides its infrastructure as a service (IaaS).

This cloud model follows a single-tenant architecture, which means that a single instance of the software and supporting infrastructure serves a single client. Each client has its own independent database and instance of the software.

What is a cloud-based PLM?

A cloud-based solution means the vendor has built the PLM application directly in the cloud, from scratch, using a Software-as-a-Service (SaaS) approach.

As in the cloud-hosted model, the servers, databases and code that make up the software are managed and hosted by the vendor. The difference is that in this model all clients share the software application, a single database and the servers. However, the data of each client is isolated and kept invisible to other clients.

An analogy often used to explain the difference between single-tenant and multi-tenant architecture is renting your own house on a street of private houses versus renting an apartment in a building where the other apartments are rented by other people. In the latter, tenants share maintenance costs, which makes it more affordable.

If you want to know more about the single-tenant versus multi-tenant difference, you can check this article.

So what does it all mean for your company? In the table below, we summarize how choosing one model over the other will impact your organization.

What’s important to you?

There are benefits and drawbacks to both single-tenant and multi-tenant systems. Ultimately, your company must decide what is most important to its business and what can be sacrificed.

Choosing your ideal solution will depend on a variety of factors: Is cost the primary driver? Is security critical for the type of data you are storing? Does your PLM solution need a lot of customization? Are you happy with using a one-size-fits-all system? Does your industry or country have unique regulatory constraints?

If you want to know which cloud PLM solution has a single-tenant or multi-tenant architecture, stay tuned. We’ll soon publish an article to dive into cloud PLM solutions.


8 Things We’ve Learned From Working Remotely

Working remotely can pose a challenge for any organization. Even more so if (as in our case) the business was originally started this way, with no prior experience of the traditional working environment. Every member of our team lives and works in a different city, and any new staff receive their initial training online. Working remotely is a challenge, but there’s no doubt that if you can get it to work correctly, your productivity can go through the roof.

So how have we managed to turn an apparent stumbling block into an advantage?

In this blog post, we will explain how we work remotely in our organization and how this situation has enabled us to create a clearly-defined working model.

The Era of Remote Work

In today’s hyperconnected world, more and more businesses are beginning to recognize the benefits of integrating a remote working model into their workforce. This is even more relevant if you work in the PLM sector, where finding talent is a daily challenge. Occasionally you find the perfect profile only to discover that they live thousands of kilometres away from your office. In that case, working remotely is the only option. Likewise, it is common practice to outsource development services to countries such as India or to Eastern Europe, or to work with clients and software providers that are far from our borders. All of this makes remote work a much closer reality in our sector than in others.

By following this route, we are faced with the usual dilemma. The comfort of working from home comes with the inconvenience of not having all the usual office tools on hand. Not to mention the perception that still exists in many companies that allowing staff to work from home will lead to reduced output. Businesses often fear that this concession of freedom will result in a loss of efficiency and therefore money. The reality is very different: in practice, working remotely benefits not only a company’s people but also its bottom line.

Why work remotely?

Working from home is not just sitting in your pyjamas and answering emails. Nor is it just being able to manage your rest breaks in the way that best suits your lifestyle while binging on your favourite snacks before going to pick up your Amazon packages. It’s much more than that! Working remotely transforms people’s lives and makes them the masters of their own time. In our case, there is total flexibility when it comes to allocating work hours, as long as they add up to 40 hours a week. If, for example, you’re an early bird, just get up early. If you feel more inspired working in the middle of the night, you can. Of course, we must always respect our team meeting schedules, without which none of the above would be possible.

Besides, working remotely gives you something money can’t buy: You can work from wherever you like! As long as you meet your daily hour commitment in a responsible manner, it doesn’t matter if you do it from your flat in Frankfurt, a cabin in Bali or the base camp of Everest.

The availability of this freedom of time and workspace leads to a vastly improved work-life balance, which in turn results in a happier and more motivated workforce.

The Keys to Remote Work

So, if there are only advantages to be had, why are many companies still suspicious of telecommuting and stuck in the rut of their old “mind-numbing” 9-to-5 routine? The answer is simple: they don’t know how to do it. To change to this type of work, the correct transition strategy must be implemented. If, instead of your usual work routine, you suddenly decide that your employees are going to work from home, it may be harder than you think.

However, if you familiarize yourself with how it works in other companies and prepare yourself for these changes, then it will end up being the best decision for your business that you have ever made. Your employees will be happier and more involved, productivity will increase, and on top of that, you will save money.

Below we will outline the 8 things that you must adopt, no matter what, if you want remote working to be a success for your organization.

1. Planning your day

Planning your day is a vital element that is equally important whether you work from home or from an office. Sorting your tasks in order of most to least important and prioritizing them accordingly is key. We work with 3 types of tasks: Must, Intend and Like. Every day, each team member is given a Must task, an Intend task and a Like task. The Must task is the most important task of the day; failure to complete it means your day has been unproductive. If you finish your Must task, you move on to your Intend task, and only by completing that can you move on to Like, the least important of all.

2. Task management tools

We use Asana as a fundamental part of our day-to-day activity. Asana is software that facilitates the planning, management and monitoring of each team member’s tasks. Thanks to it, we can access each other’s task calendars, add or remove tasks and classify them by project. Another key feature is its ability to assign deadlines to tasks, which makes overall organization much easier. It’s also aesthetically pleasing and simple to use. Without a doubt, it’s an essential tool if you work remotely.

3. Activity records

This step is essential when you have no physical office and all the work is carried out remotely. The most important activities of each individual’s daily duties must be meticulously recorded. This makes it easier both to onboard new team members and to let colleagues step into any of our tasks. All of this information must be freely available on an intranet that all staff members can access and search. For example, if an employee takes a holiday and someone else has to take over their role, they can simply check the intranet to find out exactly how the task should be carried out.

4. Fluid communication

When you work from home and carry out projects in conjunction with others, a key element of success is the fluidity of communication between the parties. To ensure communication runs smoothly, there are tools such as Slack, where you can create user groups so that team members can interact with one another as if they were in the same office. Slack allows you to start conversations, ask questions and resolve any doubts. It can also be synchronised with Asana so that specific tasks can be linked to relevant conversations. Email is still the popular choice for most people, but it is far from the most effective channel when it comes to working remotely. Running Asana and Slack in tandem allows uninterrupted communication with your team, and you’ll never miss your office building.

5. Weekly meetings

Naturally, this is key to maintaining contact with all the members of the organization. In these meetings, the rest of the team is updated regarding the status of their assigned tasks. In addition, future strategies are discussed, and any important issues are shared. At Share PLM we have meetings like these every Tuesday and Friday.

6. Time tracking

When you work in an office and know you have to keep to a strict schedule, you’re never really aware of the exact time required to finish a specific task; you only picture the end of the working day, when you can go home. When you work from home with total flexibility, this changes. When planning our work day, it is often important to know exactly how many hours certain tasks take us. That’s why we use Toggl, a time tracker that sits in the browser bar and, like Slack, can be synced with Asana. Before starting any task, simply click the button to start the timer. Once you’ve finished, click the timer again to stop it. Toggl allows us to quickly assess which tasks we need to spend more time on and which ones take less.

7. Virtual café

As a company with extensive experience in telecommuting, one of the most difficult aspects for us is having our workmates so far away, not only for work-related reasons but for personal ones too. Sometimes working alone at home for long hours can be hard, and we miss the little things like our Monday morning chats or coffee breaks where we catch up on the weekend. Those are moments when we get to socialize and forget about work for a minute. To bridge this gap, we hold a daily meeting we call the “virtual café”, where attendance is optional. It is always held first thing in the morning, and we chat for about 15 minutes about anything other than work. It’s a great opportunity to get to know the team a little better when you have no daily physical contact.

8. Meeting in person

If possible, organizing a meeting where all the team members can get together in person is a fantastic idea. In our case we generally get to see each other 3 times a year, each time in a different place (usually where one of the team members lives). The team gets to spend 2 or 3 days together sharing our viewpoints and amusing stories from our daily lives while developing a business strategy to follow. The result is always very positive, and it allows us to form deeper bonds with our workmates, especially when the beers come out at the end of the day!

What about you? Are you working remotely?

Please share with us what you and your team usually do to make your work easier!

What Is Blockchain and Why Does It Matter To PLM?

PLM Blockchain

With the words “blockchain” and “Bitcoin” on everyone’s lips these days, you risk seeming outdated if you can’t discuss them in your after-work weekly catch-ups or at home during dinner.

Whether you’ve heard about it at your most recent conference, at the office, or even in your local newspaper, you probably found yourself in the same situation we did—confused about what’s behind this “dark and secret” highly hyped technology.

In simple terms, blockchain is a shared public ledger on which cryptocurrencies like Bitcoin rely. But if blockchain is basically just the technology underlying Bitcoin, why are people so excited about it?

Because blockchain is much more than just Bitcoin, and the PLM sphere is beginning to recognize how it can reshape the way companies manage their products and information flows.

Blockchain puts the product at the heart of the data structure. It enables different organizations, with completely distinct data models, to collaborate across the product’s lifecycle.

And blockchain does this by maintaining a clean audit trail and an immutable data thread.

Untangling the blockchain wires

At its core, blockchain is a new way of storing and managing data. Think of a blockchain as a database that can be used to store and share records of value. However, it’s not like a traditional database, where information is stored in a central location.

Blockchain databases aren’t stored in any single location, like a bank or a cloud datacenter. Information in a blockchain is held on the individual computers of the people who use the database. That’s why blockchain is often described as a decentralized, distributed ledger.

This “distributed ledger” is used to keep track of transactions. In a blockchain, transactions are packaged into blocks. A “block” is a collection of transactions that are validated at the same time. Each block is then “chained” to the next block, in linear, chronological order, using cryptography. Cryptography is the underlying foundation of blockchain. It’s used to sign transactions, authorize exchanges of value and much more.

Blockchain allows consumers and suppliers to connect directly and trade without intermediaries, removing the need for a third party.

Anatomy of blockchain

Each blockchain is made up of a series of blocks containing validated transactions.

Let’s do a deeper dive on each of the core components of a blockchain.

1. The Transactions

A business transaction is a transfer of value, such as goods, money or services between two parties. Every transaction involves:

  • A digital asset: Information stored in a blockchain can be anything – from money, stocks or even identities to digital goods such as art, music or even code!
  • Sender: The person who wants to send a digital asset. To initiate the transaction, the sender only needs to know the address of the person she wants to transfer the digital asset to.
  • Receiver: The person who receives the digital asset. She needs to share her blockchain address with the sender each time a transaction is to be made.

Authenticating a transaction

Each transaction must be verified before it’s allowed to enter the blockchain.

The verification process is often done using two keys, a public key and a private key. Everyone can see the public key, but the private key is secret.

Public and private key pair

Blockchain uses public key infrastructure (PKI) to authenticate transactions. Every blockchain user has a key pair: a public key and a private key, used to encrypt and/or sign data. Private keys are mathematically related to public keys; however, thanks to strong encryption, it is practically impossible to derive a private key from a public key.

To better understand how public private key pairs work, let’s imagine that you have a mailbox. The public key is the address of the mailbox. A person can insert letters into your mailbox, but cannot retrieve them; you need to use your private key to open the mailbox and retrieve the letters.

Encrypting and decrypting is like locking and unlocking your mailbox. If anyone encrypts (“locks”) a transaction using your public key, only you can decrypt (“unlock”) it with your private key. If you encrypt (“lock”) a transaction with your private key, anyone can decrypt (“unlock”) it. This action serves as a “digital signature.”

In the digital world, keys are just text strings with many digits. You can generate your own public and private keys using this online tool.

A cryptographic digital signature

Transactions are authenticated with digital signatures. A digital signature is created with a sign function that depends both on the transaction itself and on the private key.

Since the digital signature is created with your private key, no one can produce it but you. Additionally, because the transaction data is also used to create the signature, a signature can’t be copied and reused for other transactions.

Whenever you want to receive a transaction, you share your public key with the sender. The sender signs the transaction with their private key, locks it with your public key, and sends it to you. Finally, you unlock the transaction with your private key and verify the signature with the sender’s public key.
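To see those mechanics in action, here is a minimal sketch of signing and verifying a transaction, using the third-party Python cryptography package (any ECDSA library would work just as well):

    # Requires the third-party 'cryptography' package.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Generate a key pair; SECP256K1 is the curve Bitcoin uses.
    private_key = ec.generate_private_key(ec.SECP256K1())
    public_key = private_key.public_key()

    transaction = b"Alice sends 1 coin to Bob"

    # The signature depends on both the private key and the transaction
    # data, so it can't be forged or reused for another transaction.
    signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

    # Anyone holding the public key can verify; verify() raises
    # InvalidSignature if the transaction or signature was tampered with.
    public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
    print("signature verified")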

2. The Blocks

Transactions in a blockchain are stored in fixed structures called “blocks”. The important parts of a block are:

  • Block content: A validated list of transactions.
  • Block header: It contains key metadata about a block. There are four main sets of metadata in a block header:

– A block identifier: To identify a block, we use digital signatures that are generated using cryptographic hashes. And what are cryptographic hashes?

A cryptographic hash is a kind of ‘signature’ for a text or data file. A hash is a function that converts data of any size into a fixed-size string.

Whether the input is a single number, a long text or a digital file, the resulting hash is always the same size.

Converting a string to a signature is called hashing. Hashing only goes in one direction; you can’t take the fixed-length data output and recreate the string. Blockchains often use a SHA-256 hashing function, which generates an almost-unique 256-bit (32-byte) signature for a text.

This hashing online tool allows you to generate the SHA-256 hash for any string.

– The previous block hash: Every block includes a link back to the previous block. This way we can access all previous blocks in a blockchain – they are linked together, and the database retains the complete history of transactions.

– A Merkle tree root: It’s a data structure that condenses the transactions in the block. A Merkle tree is built by hashing pairs of transactions until we come up with only one hash.

The node at the top of the Merkle tree is called the root. To come up with a Merkle root, we start from the bottom. We take the transactions and hash them. Then we pair those hashes, concatenate them and hash them again. And so on, until we come up with only one hash.

– Proof of work: Valid blocks contain the answer to a complex mathematical problem created using an irreversible cryptographic hash function. The only way to solve this mathematical problem is to guess random numbers. We’ll explore proof of work in more detail a bit later, and a short code sketch below ties these header fields together.

If you want to explore the block’s content by yourself, have a look at one of the blocks in the blockchain.info public records.
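To tie these header pieces together, here is a simplified sketch, using only Python’s standard library, of how a block identifier can be derived from the previous block’s hash and a Merkle root of the transactions (real blockchains add more header fields, such as timestamps and nonces):

    import hashlib
    import json

    def sha256(data):
        return hashlib.sha256(data.encode()).hexdigest()

    def merkle_root(transactions):
        """Hash pairs of hashes repeatedly until one root hash remains."""
        layer = [sha256(tx) for tx in transactions]
        while len(layer) > 1:
            if len(layer) % 2 == 1:   # odd count: duplicate the last hash
                layer.append(layer[-1])
            layer = [sha256(a + b) for a, b in zip(layer[::2], layer[1::2])]
        return layer[0]

    def block_id(previous_hash, transactions):
        """The block identifier is a hash over the block's header fields."""
        header = json.dumps({"prev": previous_hash,
                             "merkle_root": merkle_root(transactions)})
        return sha256(header)

    genesis = block_id("0" * 64, ["Alice pays Bob 1"])
    block_2 = block_id(genesis, ["Bob pays Carol 2", "Carol pays Dan 1"])
    print(genesis, block_2)  # block_2's identifier depends on genesis

Change any transaction in the first block and every identifier downstream changes with it; that chaining is what makes the history tamper-evident.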

3. The Blockchain

All confirmed transactions and blocks are included in the blockchain. To confirm pending blocks, blockchains use a process called mining. Mining prevents previous blocks from being modified, protects the neutrality of the network, and ensures consensus.

Now let’s explore the main actors in the mining process.

  • The miners: Transaction requests are sent to every computer on the network so the transactions can be validated. These computers are called miners. Miners validate new transactions and record them on the blockchain.

To validate the transactions, miners must solve a difficult mathematical problem based on a cryptographic hash algorithm. This problem can only be solved by guessing random numbers. Every miner on the network competes to guess the solution first.

  • The Proof of Work: The solution to this mathematical problem is called the Proof of Work. When a block is “solved”, the transactions it contains are considered confirmed.

The first miner to solve the problem gets a reward, which compensates for the large amounts of computing power and electricity the mining process consumes.
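As a rough sketch, proof of work boils down to a guessing loop like the one below (with a difficulty far lower than any real network would use):

    import hashlib

    def proof_of_work(block_header, difficulty=4):
        """Guess nonces until the block hash starts with `difficulty` zeros."""
        nonce = 0
        while True:
            candidate = "{}{}".format(block_header, nonce).encode()
            if hashlib.sha256(candidate).hexdigest().startswith("0" * difficulty):
                return nonce  # the answer other nodes can verify instantly
            nonce += 1

    print(proof_of_work("previous_hash+merkle_root"))

Finding the nonce takes brute force, but checking it takes a single hash, which is why the rest of the network can verify a miner’s answer cheaply.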

Exchanging value with blockchain

Imagine you have created a PLM API that gathers and presents data from CRM, ERP and PDM systems. You want to license and sell this application using a blockchain-based “PLM marketplace”.

You can use a token to identify your PLM API digitally. This token is stored on the blockchain and contains a link to the PLM API, stored somewhere in the cloud. Everyone on the PLM Marketplace blockchain agrees that the API belongs to you and that your API is officially licensed. If I want to buy your API, I sign the transaction with the API’s token, your public key and my private key. Once the network validates the transaction, it’s added to a block stored in the blockchain, and the PLM API’s license is mine. I can now use it freely; if someone wants to check the authenticity of the license, they can go back to the blockchain and audit the transaction.

PLM and blockchain: what does the future hold?

It’s still early, but blockchain may well play a significant role in the PLM world. It has the potential to ease integrations, simplify migrations, enable end-to-end collaboration, and provide an accurate record of the “who, what, where and when” across the product’s lifecycle.

It promises to connect businesses whose applications, data models, part numbering and coding systems are different. Blockchain puts the product at the heart of the systems and allows them to focus on the data they need to collaborate on.

Other opportunities—in copyright protection, additive manufacturing, supply management, IoT data management, and sustainability—are on the horizon.

Although PLM vendors don’t offer anything off the shelf right now, many businesses, like Maersk, Toyota or Walmart, are exploring ways to put blockchain to work for their products.

Nevertheless, it will likely take some time before the technology is in productive use. The technology is still in its infancy: lack of standards, scalability, incompatibility between different blockchains and the unfathomable amount of computing resources and energy used throughout the mining process are only a few of the challenges that blockchain needs to meet before it becomes commonplace.


Single-Tenant Vs Multi-Tenant Hosting

Summary: Multi-tenant and single-tenant hosting are two ways SaaS companies provide their services. Multi-tenant hosting is when many clients exist on the same software instance, sharing infrastructure, a database, and/or an application server. It’s less expensive but comes with risks. Single-tenant hosting is when a tenant doesn’t share anything. It’s more expensive and demands much more administration, because a full software stack must run for every client.

Software-as-a-Service (SaaS) products can have various levels of multi-tenancy. At the application server level, there can be a pool of load-balanced application servers that services multiple clients. At the database level, there can be a database per tenant or a shared database across all tenants.

In this article, we highlight some of the key characteristics of single-tenancy and multi-tenancy deployment so you can choose the solution that’s best for you and your PLM plan.

Multi-tenant hosting

Multi-tenant hosting (also called shared hosting) is when a single instance of a database, application server, or infrastructure is shared across multiple clients. Each client pulls from shared resources, and each client’s data is essentially tagged and partitioned to keep it separate. One way to think about this is like an apartment building. A multi-tenant client has their own apartment with a key that only works on their door but shares the overall building.
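Here is a minimal sketch of that “tagged and partitioned” idea, using an in-memory SQLite database; the table, column and tenant names are invented for illustration:

    import sqlite3

    # One shared table serves every client; each row is tagged with the
    # tenant it belongs to.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE parts (tenant_id TEXT, part_number TEXT)")
    db.executemany("INSERT INTO parts VALUES (?, ?)",
                   [("acme", "P-100"), ("acme", "P-200"), ("globex", "P-100")])

    def parts_for(tenant_id):
        # Every query is scoped to a single tenant, so clients share the
        # infrastructure but never see each other's rows.
        cursor = db.execute(
            "SELECT part_number FROM parts WHERE tenant_id = ?", (tenant_id,))
        return [row[0] for row in cursor]

    print(parts_for("acme"))  # ['P-100', 'P-200']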

Advantages of multi-tenant hosting

  • Multi-tenant hosting is less expensive

Because there are many clients on the same servers using the same underlying software, there are significant cost savings. 

In a single-tenant-only world, every client would need a full infrastructure of servers, routers, firewalls, etc. This drives up cost.

On a multi-tenant instance, these costs, along with things like system monitoring and servicing the deployment, become shared, which makes it less expensive for everyone.

  • Multi-tenancy simplifies hosting

Multi-tenant solutions simplify hosting in two big ways:

  1. Protecting systems is generally easier, because there are fewer servers and infrastructure systems to interact with, which reduces vulnerability exposure. For example, with multi-tenant hosting there are generally fewer servers to patch and monitor for vulnerabilities.
  2. Upgrading software is much easier: because every client runs the same version of the software, the whole deployment can be upgraded in a single outage/maintenance period.

Disadvantages of multi-tenant hosting

  • Greater security risk

Strict authentication and access controls need to be in place to prevent clients from reading, writing, and updating each other’s data. What’s more, there is a risk that data corruption can propagate through all the clients in an instance – a risk multi-tenant hosts work hard to mitigate.

  • Serviceability and maintainability

Because everyone relies on the same codebase, updates to both hardware and software can affect all clients, and the maintenance period for any downtime will affect all clients at the same time.

  • Possibility of competing for system resources

Multi-tenant hosting shares system resources, so as the client base grows, provisions must be made to add more resources as needed. While in theory clients can compete for these resources, mechanisms like load balancing and “elastic” computing keep resources balanced across all clients. These checks and balances ensure every client is serviced in response to dynamically changing resource demands.

Single-tenant hosting

Single-tenant hosting (also called dedicated hosting) is when a single instance of software and infrastructure is dedicated to a single client. Single-tenant hosting is like a single-family home where no other families live: the family owns and uses everything pertaining to the home.

Advantages of single-tenant hosting

  • Single-tenant hosting gives clients more control

Single-tenant solutions can be customized more than multi-tenant solutions, since they service only one client. By analogy: if you live in a house, you can knock out a wall and no one else cares; you can do what you want, and it won’t impact anyone living in the house next door.

  • Single-tenant hosting offers isolated security risks

Unlike multi-tenant arrangements that share a database and server, single-tenant clients have these all to themselves. Security exposures to vulnerabilities and penetration attacks are isolated to the single client, and recovery can be expedited from a backup/restore or disaster recovery system.

  • Single-tenant clients can be choosier about software changes

Because they’re the only tenant, they can usually choose to accept or decline a software update. Most SaaS organizations do set an end of life for old software, after which they either stop supporting it or force an update, but single tenants can choose when and whether to update, provided it happens before the software version expires.

They can also choose what features or add-ons they want, as well as request custom solutions built specifically for them. This is rare, since most vendors would still want to maintain a single code base with configurable options per client, but it is possible.

  • Dedicated systems services

There is better control of system capacity planning and monitoring, since the client knows the characteristics of its traffic workload. When sharing system resources, it can be challenging to deterministically tell what resources are needed, since there is a dependency on other clients.

Disadvantages of single-tenant hosting

The primary disadvantage of single-tenancy is cost.

  • There is no cost sharing for things like serviceability, system monitoring, and deployment.
  • Clients need to worry about their own data backup/restore and disaster recovery system as well as manage their own patching and updating (which means high IT costs).

It’s not just cost though. Single-tenant systems can be less efficient as well, first because they’re running on entire servers that might not be at capacity, and second because the underlying software is only serving one client and not benefiting from all the services provided by multi-tenant solutions.

Multi-tenant and single-tenant hosting: conclusion

Cloud solutions aren’t just for techy consumer-facing customers. They’re also a viable solution for enterprise organizations. Cloud hosting organizations and the SaaS businesses who work with them have put in a tremendous amount of effort to build solutions that work for security-conscious enterprises.

One solution that we frequently see in the enterprise is a hybrid model, where clients participate in a multi-tenant SaaS solution in the cloud while maintaining a local on-premise solution where they store sensitive intellectual property and ensure data sovereignty.

Like most tech, it’s less about choosing the best solution out of multi-tenant and single-tenant hosting, and more about choosing the product that’s right for you.


Terry is CTO of Upchain. Upchain is an intuitive cloud PLM solution that helps companies launch products faster.

Will PLM Jump Into the API Fray?

API PLM

What is an API, and how does it relate to PLM?

Order dinner from your tablet. Track your activity with an app on your smartphone. Book a seat at the movies tonight with a click of a button.

APIs make all this possible, and they’re behind much of what we do online. They connect businesses, applications, data and devices so you can order pizza for dinner, listen to your favorite music while exercising or buy movie tickets in just a few clicks.

API stands for Application Programming Interface. APIs let applications talk to one another and exchange information. Technically, they’re program blocks that ease development and set up the routines, protocols, and tools needed to interact with an application.

How do APIs work?

APIs are like digital building blocks that are put together to provide a wide variety of features and functions. There are thousands of public APIs that can be used to do everything from checking a location and gathering social media information, to authenticating users.

Modern companies don’t build apps from scratch. They pick and choose among the available APIs to speed the development process, maintain a robust architecture and offer the best customer experience.

Let’s look at an example to understand how companies combine APIs to build apps.

Imagine you want to build a hotel app that helps users find available hotels in their location. The user selects where they want to stay and when, and the app provides a list of available hotels in that region. They can then use the app to select and book their favourite room.

How would you build this simple hotel app using APIs?

A basic combination of mapping, calendar, authentication and payment APIs would do the job:

The mapping API figures out what hotels are close by, while the authentication API lets loyal customers easily log in and quickly book a room. The calendar API checks out room availability and rates. And finally, the payment API allows users to reserve a room and settle their bill.
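To sketch how those building blocks compose, here is the hotel app’s flow in Python. Every function below stands in for a call to a third-party API, and all the names and data are made up for illustration:

    def find_nearby_hotels(lat, lng):      # mapping API
        return ["Hotel Sol", "Hotel Luna"]

    def check_availability(hotel, dates):  # calendar API
        return {"hotel": hotel, "price": 120} if hotel == "Hotel Sol" else None

    def log_in(user, password):            # authentication API
        return {"user": user, "token": "abc123"}

    def charge(session, amount):           # payment API
        print("charged", session["user"], amount, "EUR")

    def book_room(user, password, lat, lng, dates):
        session = log_in(user, password)
        for hotel in find_nearby_hotels(lat, lng):
            room = check_availability(hotel, dates)
            if room:
                charge(session, room["price"])
                return room

    print(book_room("ana", "secret", 40.4168, -3.7038, "2020-03-01"))

The app’s own code is little more than glue: each capability comes from someone else’s API.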

Understanding REST APIs

REST stands for REpresentational State Transfer and refers to an architectural style that uses HTTP methods and standardized web technologies. A REST API is a system where clients use a defined interface to interact with a web server via the HTTP protocol.

There are other types of APIs, such as Simple Object Access Protocol (SOAP) or Remote Procedure Call (RPC), but REST has achieved great popularity in recent years. This is mainly because REST performs well and is highly scalable, simple, and easy to modify and extend.

To get a sense of what a REST API is and how it works, imagine a café, a cashier, and a customer who wants to grab a cappuccino.

Here’s how our café works:

On one side, we have the client. Think of the client as the customer who wants to order a cappuccino for breakfast. The client could be a web application, a mobile app, a smartwatch, or whatever interface wants access to the data.

On the other side, we have the cappuccino, the asset. Assets are typically stored in a database or database server.

In the middle sits our cashier, the REST API, receiving, processing, and handling requests and responses.

When the client submits a request – in this case, to get a cappuccino – the REST API receives the request, identifies the requested resource, grabs the resource and sends it all back to the client.

Anatomy of a REST API

Now that we have a sense of how a REST API works, let’s take our understanding a step further by taking a closer look at its main components.

The Client

The clients are the API consumers. They can be mobile apps, web browsers or embedded IoT devices.


Request

A REST request has two essential parts: a method and a URI. In some cases, headers may be sent to specify information about the request. If the request is intended to write new information to the system, a body is used to convey that information.

The header primarily enables a user to access a resource. Headers are also used to set language, format and compression preferences.

The method is one of the standard HTTP operators, and the URI points to the resource you want to interact with. RESTful APIs use standard HTTP methods to perform four essential operations:

  • GET – view
  • POST – create
  • PUT – edit
  • DELETE – delete

The URL is the unique identifier for the resource. It’s like any other URL on the internet, except in this case it’s used to describe the resource in an application.

Finally, the body section is only sent with create (POST) and update (PUT) transactions.

API

The API is the gateway to the server where the assets are. It provides access to those assets the company wants to share. A well-designed API defines what it can provide and how to use it in a sort of contract: available methods, query parameters, response formats, request limitations, language support, and so on should all be part of that contract.

APIs also act as a bodyguard for the exposed assets. There are three main security measures that most APIs use: identification, authentication, and authorization. API keys are unique codes that are generally used to authenticate users and manage access.

Assets

Assets can be data points, programs or services that a company owns and wants to expose. They're the bread and butter of any API: the ultimate goal of an API is to share assets, whether data, services or insights.

Response

Every response comes back from the server with a status code indicating the success or failure of the action requested.

Responses in REST APIs typically use JavaScript Object Notation (JSON). The JSON format is compact and easy to transmit on slow networks.
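
Picking up the sketch from the Request section, the client inspects the status code and decodes the JSON body; the exact fields naturally depend on the API at hand:

  # A 2xx status code signals success; 4xx/5xx signal client or server errors.
  print(response.status_code)   # e.g. 201 (Created)

  # The JSON body decodes straight into a Python dictionary.
  order = response.json()
  print(order)                  # e.g. {'id': 42, 'item': 'cappuccino', 'sugar': 2}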

Let's return to our café to see these API components in action. The customer orders a cappuccino and the cashier creates a new item: an order. The customer wants extra sugar in her order, so the cashier updates the order. When the customer asks the cashier to tell her what she's ordered, the cashier reads the order. Suddenly the customer realises that she's left her wallet at home, so she cancels the order. The cashier then deletes that item.

APIs in action

To better understand how APIs work, let’s look at an API in action. We’ll use an API management tool called Apigee to visualize the examples’ requests and responses. Apigee is a platform that provides API design, management and support tools.

Let’s start with a simple call to the Google API. In this example, we want to dig out Madrid’s GPS coordinates.

Method: GET—This is the read method in HTTP.

URL: http://maps.googleapis.com/maps/api/geocode/json?address=Madrid —The URL points to the resource location. For GET methods, you can try typing the URL into the browser.

Body: none—A GET request doesn't need a body, because we're just reading a resource from the server.

By sending the previous request to the Google Maps server, we’ll get Madrid’s latitude and longitude:
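
If you'd rather skip Apigee, the same call takes a few lines of Python. One caveat: Google's Geocoding API now requires an API key, passed here as an extra parameter; the response structure follows Google's documented format:

  import requests

  resp = requests.get(
      "https://maps.googleapis.com/maps/api/geocode/json",
      params={"address": "Madrid", "key": "YOUR_API_KEY"},  # key now mandatory
  )

  # The coordinates live under results[0].geometry.location.
  location = resp.json()["results"][0]["geometry"]["location"]
  print(location["lat"], location["lng"])   # roughly 40.42, -3.70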

Let’s now go ahead and use the Twitter API to send a tweet.

First, we need to authenticate ourselves with our Twitter account, so that Twitter knows that the request comes from us. Then we select the POST status update method from the Twitter API and set the parameters.

I'm sending out "Test tweet from Share PLM" from Madrid, using as input parameters the latitude and longitude we gathered from the Google Maps API. We go ahead and click send, and check that the tweet has been posted correctly.

In the response, we also get metadata about the tweet we just created, such as its id and status.

We could then go ahead and delete the tweet we just created using the “destroy” method from Twitter’s API.
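
For the curious, here's roughly what those two Twitter calls look like in Python, using the v1.1 endpoints that were current when this was written (Twitter has since restricted them). The OAuth credentials are placeholders you'd get from a Twitter developer account:

  import requests
  from requests_oauthlib import OAuth1  # pip install requests-oauthlib

  auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
                "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

  # statuses/update creates the tweet, with Madrid's coordinates attached.
  resp = requests.post(
      "https://api.twitter.com/1.1/statuses/update.json",
      params={"status": "Test tweet from Share PLM",
              "lat": 40.4168, "long": -3.7038},
      auth=auth,
  )
  tweet_id = resp.json()["id"]  # metadata about the tweet we just created

  # statuses/destroy/:id deletes it again.
  requests.post(
      f"https://api.twitter.com/1.1/statuses/destroy/{tweet_id}.json",
      auth=auth,
  )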

As you can see, there are lots of APIs you can tap into and play with. The idea is to mash several APIs together to create apps and consume data easily.

Will PLM jump into the API fray?

APIs have become the operating system of digital businesses, connecting previously siloed systems and applications. Designed to be flexible, nimble and scalable, APIs provide an efficient way to build the digital services that today’s connected consumers expect.

The API economy has the potential to transform the PLM arena as well.

Businesses can expose their services and product information to their customers. But also, internally, they can connect applications and share core system data through APIs. For instance, you might want to build internal applications that access core systems for reporting, business intelligence or visualization.

PLM software providers can enable businesses and partners to plug directly into their systems, easing integrations and exposing information and functionality. PLM APIs can potentially be open to internal developers, to partners or to external developers. Products like Propel PLM, OnShape, Fusion 360, OpenBOM or Aras are moving in this direction.

There might even be a big slice of the pie for startups in the API economy. How about a completely new way of implementing PLM with on-demand, flexible APIs?

Bimodal PLM: An Attempt to Keep Pace With The Speed of Digitalization

Seeking agility in Bimodal PLM

The need for greater speed and agility to keep pace with the evolving digital landscape has also hit the PLM sphere. Digitalization has led to competition that challenges traditional PLM applications and project implementation methods.

Product data is usually managed using large, traditional systems, with myriad methods and processes that require rigorous development and testing methodologies.

Established core systems can’t move fast enough to handle the rapid pace of changing technology and customer needs. Most traditional PLM processes and systems are not designed for speed and agility.

This lack of versatile and flexible applications and processing ability has led organizations to seek PLM agility via "Bimodal IT".

The rise of bimodal IT

The term Bimodal IT was coined in 2014 by Gartner, which defines it as:

“… the practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility. Mode 1 is traditional and sequential, emphasizing safety and accuracy. Mode 2 is exploratory and nonlinear, emphasizing agility and speed. Bimodal IT is the only sustainable solution for businesses in an increasingly disruptive digital world.”

Bimodal PLM

Bimodal is the practice of managing two separate but coherent delivery modes:

  • Mode 1 focuses on predictability; its primary objective is to keep the business going.
  • Mode 2 concentrates on exploration and targets innovation, experimentation and learning.

Bimodal PLM distinguishes an organization's traditional IT, which calls for deliberate stability and care, from innovative, digital customer-facing capabilities that require agility and speed.

Enabling experimentation with Mode 2

Bimodal IT enables organizations to test new technologies without risking business continuity. It calls for two distinct teams with well-defined areas of priority and focus. The “Mode 1 team” ensures that core systems run effectively and efficiently, which gives the “Mode 2 team” space to work on innovative and user-centric solutions that increase engagement and provide quick business value.

The key aspects of enabling experimentation through Mode 2 are:

  1. Experimentation and learning

An organization driving innovation must be able to experiment with new ideas, new features, new user experiences, new business models, and new technologies and incorporate learning as a core value in the team’s culture.

  2. Customer-centric design

A focus on business outcomes, where the goal is to ensure that you are building the right product by continuously validating the product's vision with users.

  3. Agile development and DevOps

The pillars of Mode 2 are agile and DevOps methodologies, which anticipate the need for collaboration between development and operations, and are grounded in flexibility and pragmatism in the delivery of a product.

  4. Cloud application and infrastructure design

Unknown scale requirements demand an elastic infrastructure. Cloud technologies enable flexibility in how apps are built, delivered, and managed, and usually yield business outcomes more quickly.

Potential drawbacks of “going bimodal”

While having an innovation lab in your IT organization might sound like a great idea, the bimodal approach introduces several challenges.

1. It can build a wall of confusion between IT groups

This dual-mode process may introduce a breakdown in communication and can quickly build a troublesome wall between IT groups competing for funding, resources and, most importantly, attention.

2. Mode 1 may slow down Mode 2

There are lots of dependencies between user-focused solutions and traditional core systems, and projects running in Mode 2 usually require development work in traditional systems, too. Making data and functionality available from core applications or creating a change request for master data are just two examples of the collaboration that may be required between Mode 1 and Mode 2. Mode 1 traditionally follows a rigid release cycle, so any changes need to be planned well in advance. It's important to get organised beforehand and reserve adequate resources on the core-systems side to make it work.

3. Failure to achieve long-lasting change

It’s easy to focus on developing innovative technologies, but don’t forget that development is just part of the equation. Working on a project end-to-end, developing and implementing processes and training the organization is just as important as creating a shiny new tool. Don’t ignore change management, and make sure there’s a proper handover to the people responsible for keeping the lights on.

4. Decreasing motivation in the Mode 1 team

Employees will probably quickly label Mode 1 as “boring” and Mode 2 as “exciting.” Some may not want to work on Mode 1 because of a perception that taking care of traditional applications is not as cool as building new things. That creates a dangerous culture that could lead to the Mode 1 team members becoming unmotivated and looking at the exit door.

Towards an end-to-end multi-speed IT

Is bimodal the path to help companies shed the lethargy associated with rigorously documented and monolithic PLM systems? Critics argue that deliberately bypassing established processes and teams to get things done might not be the right way to address the need for speed.

According to Forrester Research, Bimodal IT may provide some relief for CIOs in the short term, but it is not a strategy for long-term success. In a world where we are trying to break down silos, building a wall between the “innovative and fast” and the “legacy and slow” is probably not a long-term solution: it’s too rigid, and oversimplifies the solution to the real problem.

Learn from the experience of Bill Ruh, chief digital officer at GE Digital. He has run several bimodal PLM projects and concluded that to be a truly digital company, bringing together all digital capabilities under one group is the only way to make it work.

Some advocate for an evolution of bimodal: multi-speed IT. The core principle of multi-speed IT is to enable multiple delivery pipelines that support the various speeds and technology platforms the business requires.

Even supporters of this approach call for coordination across delivery pipelines. Identifying and understanding the architectural dependencies between the various projects executed by different teams is essential to ensure that the delivery and release of each application and innovation project is coordinated.

Are Microservices The Next Big Thing For PLM?

Are microservices the next big thing for PLM? With social media, cloud and mobile technologies setting new benchmarks for speed, agility, and user-friendliness, today’s users expect similar performance and flexibility from Product Lifecycle Management (PLM) platforms.

While the business case for embracing more user-friendly and flexible technology is widely documented, software vendors have historically shied away from it, given the big changes to their core architecture and business models it demands. Opting for radical change in well-established and tested platforms, starting from scratch, is a risky path.

Risk, however, is an inescapable part of every business strategy, and the rise of cloud computing and open APIs has radically changed the game for the PLM industry. Even if traditional software vendors don't want to make their applications less monolithic, technology and customers are forcing the issue.

PLM MICROSERVICES: RESHAPING ENTERPRISE ARCHITECTURE

The change in enterprise architecture technologies has been rapid and broad. The current array of cloud technologies is more powerful and less expensive than the previous generation. It enables companies to store, analyse and actively use data to make decisions and generate new business opportunities.

However, these new tools are also more complex and, in many cases, represent a challenge to well-established enterprise systems and platforms.

Microservices are the big trend in architecture and software development. They attempt to ease the speed and flexibility pain. And it seems they are also popping up in the PLM sphere: PLM gurus Jos Voskuil and Oleg Shilovitsky discuss in their blog posts “Microservices, Apis, Platforms and PLM services” and “Will PLM microservices eat PLM dinosaurs” how microservices are driving the PLM arena towards more agile and flexible architecture.

Microservices represent a fundamental shift in how businesses approach enterprise architecture and software development. In a nutshell, microservices remove business logic from applications and replace it with reusable modules of code that are completely independent from all other parts of the applications.

Let’s explore in greater detail what microservices are, and how they may well disrupt the big PLM platform vendors.

A BRIEF HISTORY OF ENTERPRISE ARCHITECTURE

Enterprise architecture origins: Monolithic applications

Before the 1990s, architecture was strictly monolithic. A monolithic architecture consists of a single application. As we reviewed in "Information systems in PLM", there are three main building blocks in monolithic applications: a database, a server-side application and a client-side user interface. The server-side application handles HTTP requests, executes the business logic, retrieves and updates data in the database, and delivers data to the browser.

In a monolithic architecture, any change to the system means the whole application must be checked out from version control, updated on the server side and redeployed as a unit.

Software in monolithic applications is not easy to replace piece by piece. The constituents of these applications are tightly coupled: if you change something in one part of the program, your action will probably affect other parts of the application. So if you don't want to "break anything," your developers need a good mental model of the whole application when making any change.

The PLM platforms widely available today are mostly based on monolithic architecture. They usually support businesses in a proprietary way—that is, they are customized for individual organizations. Such designs make it difficult and expensive for companies to share, consolidate, and adapt to changing business realities.

Decoupling monolithic applications with Service-Oriented Architecture (SOA)

Around 2000, there was a shift in enterprise architecture. Essentially, several pioneers arrived at the idea of assembling applications by using a set of building blocks known as components—some of which are available “off the shelf,” and some of which are built from scratch.

They sliced large, inflexible systems into a set of smaller pieces called services. These services exchanged data over the network, and the communication between components usually happened via XML.

In SOA, the server defines what functionality can be accessed and what the requests and responses should look like. The interactions are verbs—things you can do with requests to the system—rather than being associated with specific system resources.

Main components of SOA

Web services

“Web Services” are programs that let one application talk to another application over the internet. Imagine you are in a restaurant and order a meal. The waiter takes your order, brings it to the kitchen, and at some point, you get your meal. In this example, the waiter is acting as a web service.

Web services are not tied to any programming language. Coming back to the example, if you speak Spanish and the cook in the kitchen only speaks German, the waiter would take your order in Spanish and translate it to German, so that you get the meal you ordered.

Enterprise service bus (ESB)

In service-oriented architecture, the different software components talk to each other by sending messages. To transport the messages between software components, SOAs typically use an enterprise service bus (ESB).

The SOA registry

The SOA registry is a library that stores information describing what each component does. Developers and applications consult the registry to learn which services exist and how they should be used.

Implementation challenges

The challenge with SOA implementations was that, even though they achieved greater software modularity, the application pieces were still quite large. Furthermore, strictly defining how data could be accessed reintroduced rigidity. Some vendors developed proprietary ways to call methods over the network, which again led to tight coupling.

MICROSERVICES FOR AN ACCELERATED SOFTWARE DEVELOPMENT LIFECYCLE

Around 2010, the term “microservices” was coined. In a microservice architecture, applications are built as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.

APIs are simple interfaces provided over HTTP. They are usually built on REST interfaces, which make data accessible using JavaScript Object Notation (JSON) and the HTTP verbs POST, GET, PUT and DELETE to create, read, update and delete resources. These protocols and data formats are simpler to use than the web services standards of early SOA.

Microservices can be seen as an evolution of SOA. While SOA focused on sharing functionality from the applications, microservices focus more on making the data itself available, without restricting how to use it.

Microservices are very good at consuming data from different sources and transforming it for other applications.

BASIC PRINCIPLES FOR BUILDING MICROSERVICES

1. One microservice, one task

The main principle is that each microservice should perform only one task. Imagine an application that handles orders: in a microservice architecture, there's a dedicated service for each task in the order-handling process, as sketched below.
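
As an illustration, here's what one of those single-task services might look like as a tiny Python web app (using the Flask framework; the route and fields are hypothetical):

  from flask import Flask, jsonify, request  # pip install flask

  app = Flask(__name__)
  orders = {}   # in-memory store; a real service would use its own database
  next_id = 1

  @app.route("/orders", methods=["POST"])
  def create_order():
      # This service does exactly one task: create orders.
      global next_id
      order = {"id": next_id, **request.get_json()}
      orders[next_id] = order
      next_id += 1
      return jsonify(order), 201

  if __name__ == "__main__":
      app.run(port=5001)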

2. Develop and deploy services independently

It should be possible to develop and deploy a microservice independently of all other parts of an application. In traditional monolithic architecture, a change to any single part of the application requires it to be redeployed entirely. With microservices, the application is decomposed into multiple services that can be redeployed independently.

3. Products, not projects

Traditionally, enterprise application development follows a project model – the target is to deliver a piece of functioning software which is then considered to be complete. With microservices, the idea is that software components should become reusable products that a team should own over their lifecycle.

4. Standard HTTP methods for service communication

In SOA, web services communicated using XML messages. Microservices have adopted lighter methods: plain HTTP requests and responses, with JSON to describe the data, as in the sketch below.
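
In practice, one service calls another the same way any REST client would. For instance, an order service might ask a (hypothetical) inventory service whether an item is in stock:

  import requests

  # Hypothetical internal endpoint of a neighbouring microservice.
  resp = requests.get("http://inventory-service:5002/stock/cappuccino")
  in_stock = resp.json().get("available", False)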

BENEFITS OF USING PLM MICROSERVICES

Scalability and Agility through Reuse

In a microservice architecture, large software projects are broken down into smaller, more independent modules. Each component service can be deployed, and later redeployed, independently without compromising the integrity of the application. That gives developers the freedom to develop and deploy services on their own schedule.

Better fault isolation

Microservice architecture improves fault isolation and enables continuous delivery: if one microservice fails, the others continue to work, and larger applications can remain unaffected by the failure of a single service.
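
Fault isolation still has to be designed in at each call site. A common pattern, sketched here with hypothetical service names, is to set a timeout and fall back to a degraded answer when a dependency is down:

  import requests

  def recommended_items(user_id):
      # Call the recommendation service, but degrade gracefully
      # instead of failing the whole page if it's down or slow.
      try:
          resp = requests.get(
              f"http://recommendations:5003/users/{user_id}",
              timeout=0.5,  # don't let a slow dependency stall us
          )
          resp.raise_for_status()
          return resp.json()["items"]
      except requests.RequestException:
          return []  # fallback: simply show no recommendations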

Support for third-party development

Since microservices represent a small piece of functionality, it’s easier to understand and thus outsource application development. Another advantage is that code for different services can be written in different languages.

Shifting operations towards outcomes

With microservices, there's a mentality shift: operations are decomposed and defined in terms of outcomes. This allows enterprises to visualize the work that functions, customers and suppliers are doing in terms of purpose and activity. Organizations can identify which activities should be kept in-house because they're strategic and provide a competitive advantage, which ones might be outsourced, and which can be sold as a service to customers.

Better user experience for customer-facing capabilities

Modern web technologies are much more user-friendly than traditional monolithic applications. Making relevant data available through services lets us use modern web presentation layers and support end-user app customization, data querying and filtering.

BARRIERS WHEN SHIFTING TOWARDS PLUG-AND-PLAY MICROSERVICES

The coordination challenge: Making sure your microservices play nicely together

So far so good. You are wrapping your ugly code into plug-and-play microservices. But how exactly do you orchestrate all your microservices to ensure they deliver the end-to-end service you expect? If microservices don't compose cleanly, complexity is only shifted from inside a service to the connections between services.

On top of that, it's hard to figure out exactly where the service boundaries should lie and to ensure that microservices and traditional applications work well together. Any interface changes need to be coordinated between teams, compatibility needs to be checked when updates are made, and testing can easily become more complicated.

This time, who's going to be the bad cop who helps prevent accidents? A solid governance board around microservices is required if we don't want to end up creating a jungle that once again constrains flexibility and agility.

When reuse simply doesn’t happen

The first step to reusing microservices is creating transparency. How do the guys from purchasing know that the guys from manufacturing have already developed an order-handling service? Is there someone orchestrating enterprise service libraries and making sure they are well-understood and available?

We all know that developers would usually rather write new stuff than reuse or modify old stuff. It costs about three times more to create a reusable service than single-use code, so creating reusable microservices is an investment. Again, we come back to governance: somebody needs to manage a library of corporate microservices and make sure it gets used.

When monolithic dinosaurs and lightweight microservices get married

Traditional monolithic systems, such as PLM or ERP, aren't very flexible. Some of these applications aren't prepared for modern querying and data-retrieval patterns, and you might end up creating big performance and reliability problems. Creating effective microservices that use data from these core applications requires a great deal of expertise: it isn't difficult to mess up traffic management, monitoring and performance in your core enterprise infrastructure if you experiment too much.

ARE MICROSERVICES THE NEXT BIG THING FOR PLM?

The ability to offer plug-and-play applications has become an important competitive factor. Fuelled by the convergence of social, mobile and cloud and the growing demand for flexibility and usability, companies need to be agile, flexible, and fast to meet customer expectations.

Jumping into this new model requires a combination of new mindsets, processes and technology. Microservices don’t guarantee a seamless journey, free from business concerns. But they represent a meaningful attempt to move towards a more flexible and modern IT, one that supports the “need-for-speed” in the age of the customer.

Whether microservices are yet another silver bullet remains to be seen. Some suggest a two-speed architecture: developing customer-facing capabilities at high speed while decoupling legacy systems, whose release cycles for new functionality stay at a slower pace. However, sooner or later, traditional core systems such as PLM will pick up the signals and harness change.

PLM vendors are pressured to step up and play a critical role in supporting businesses to navigate through this transition. Those who envision how the industry will evolve and act accordingly will have great opportunities to thrive and not get disrupted.


Why Business Processes Are Important for PLM

WHAT IS A BUSINESS PROCESS?

A business process enables a company to describe who does what and in which order, which is crucial in a PLM plan. A process is a series of tasks performed in sequence with clearly defined inputs, intended to deliver an output. The output can be a service, a product, or some other organizational goal. By combining all the company’s business processes, we can describe how it operates.

You can compare a business process with a recipe. Imagine you're at home and want something other than a bowl of pasta or scrambled eggs for dinner. You're in the mood for a treat, and look for a creamy gorgonzola risotto recipe. You download one from the internet and follow its written directions to create the dish. No one's there to supervise you or provide tips.

A good recipe must be specific. Think again about your gorgonzola risotto. You had the recipe, but the risotto didn’t come out as planned. Why did that happen? Perhaps the recipe’s list of ingredients was not specific enough. Maybe the descriptions provided in each step were too vague.

WHY DO BUSINESS PROCESSES MATTER TO A COMPANY?

Business processes are key to describing how a company gets stuff done. Paying a supplier’s bill or placing an order for a customer are both business processes. Good companies have documented processes. They enable consistent, high-quality outputs.

By documenting their processes, companies can expand quickly. Companies that seek to expand through mergers and acquisitions need well-documented business processes to ease integrations and support joint business operations.

Processes are also crucial for effective knowledge management. They can be used to teach new employees to perform required tasks and achieve desired outcomes.

WHY ARE BUSINESS PROCESSES SO IMPORTANT FOR PLM?

Business processes enable companies to develop, sell, deliver and support their products effectively. The quality of the processes across the product lifecycle strongly influences a product’s success. Waste in these processes can result in slower product deliveries and quality issues.

HOW DO WE DEFINE A PLM BUSINESS PROCESS?

Business process management is an overall approach that helps promote efficiency in a company. It involves documenting existing processes (Process Mapping), defining the future process (Process Modelling), implementing the defined processes (Process Deployment) and measuring the processes (Process Monitoring). Mapping, modelling, deployment and monitoring are all core PLM activities.

We use visual representations to show the way things work. In each of the steps there are activities, roles, deliverables and metrics that accurately define the tasks that need to be performed.

1. Business process mapping:

To create a business process map, we can use a process flow diagram containing activity “swimlanes”. The flow diagram shows a “swimlane” for each role and indicates the activities and events that each role executes. In this phase, the diagram shows the current situation. The following example represents the sales process of a small company.

2. Business process modelling:

During this phase, we create an improved model for the processes the company wants to optimize. A model serves as a common framework for discussion and communication. It helps people understand how the process will work and where the optimizations come from. The following example represents the optimized sales process of a small company, after remodelling.

3. Business process deployment:

After defining new processes, we need to implement them. To implement the changes productively, a training and support plan must be in place. After implementation, we move to continuous business process management. It is crucial to follow up on the process and ensure we get adequate feedback: Does the new process work well? Do we need to adjust it?

4. Business process monitoring:

A good business process must be measurable. We use Key Performance Indicators (KPIs) to measure process efficiency and implementation. A KPI is a metric: a quantifiable attribute that helps us describe performance. KPIs help a company set targets and monitor implementation progress and efficiency.

WHAT DEFINES A GOOD BUSINESS PROCESS?

It is important to keep business processes at the right level of detail. Companies sometimes mix up processes and methods. A process is a sequential set of high-level activities, with clear inputs and outputs. The methods define how to perform the actions involved in those process steps in the related information systems. Methods are more detailed, and system-specific.

A good business process is one that is clear-cut, well documented and easy to understand. The activities of a process need to be crystal clear, as do the roles of the employees and the information they use and create. Anything that isn’t clear enough will lead to confusion and waste.

Processes must be measurable and manageable. In other words, we need to be able to use data to monitor whether a process is doing well or struggling. Business processes are the building blocks of any great organization and its PLM plan. If you want people to work the "right" way, you need to define it in a business process. Well-managed business processes are a powerful corporate strategic asset.