
Microservices versus APIs


I have always been a firm believer that there is no such thing as a dumb question. Many times I have outed myself as the dumbest guy in the room with a simple question, and in the process, I hope I have been able to enlighten others.

“What is the difference between an API and a microservice?”

I was recently asked this question by a senior banking executive. She went on to explain that some of her stakeholders were struggling with the difference, and asked if MuleSoft could help. Part of my job as a Client Architect is that of a translator––to help business folks make sense of technology mumbo jumbo. I need to help people understand not just what this tech jargon means, but how it might bring value to their business. So naturally, I was up to the task. Answering this question also helped place Anypoint Platform in this overall context––a context worthy of sharing in this post.

What are microservices? 

At MuleSoft, we define microservices as an architectural pattern for creating applications. Under this pattern, applications are structured as a collection of loosely coupled services. This is distinct from traditional applications, or monoliths, that are structured as single self-contained artifacts. Think of a microservice-based application as one built from Lego bricks, and a monolith as one built from a single slab of concrete.

In its purest form, an individual microservice encapsulates some atomic business function, such as “CreateOrder.” This function could be implemented using any programming language (.NET, Java, Node.js). Individual microservices can be re-used across different areas of the enterprise, or even externally by partners and customers.
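To make this concrete in MuleSoft terms, here is a minimal sketch of what a “CreateOrder” microservice might look like as a Mule flow fronted by HTTP. The flow name, listener configuration, and the persistOrder sub-flow are hypothetical, included only to show a single atomic business function behind a clean interface:

<flow name="createOrder">
  <!-- Expose the atomic business function over HTTP -->
  <http:listener path="/orders" method="POST" config-ref="orderListener" />
  <!-- Delegate persistence to whatever system owns orders -->
  <flow-ref name="persistOrder" />
  <!-- Acknowledge creation; any consumer can reuse this same function -->
  <set-payload value="#[{orderId: payload.id, status: 'CREATED'}]" />
</flow>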

A full microservices architecture can potentially encompass thousands of individual microservices. In order to support the management of a large number of microservices, a fully-formed microservices architecture will require support from other technologies and processes. This includes cross-cutting services, APIs, CI/CD, and containerization––technologies that are all considered integral to a microservices architecture.

Benefits of microservices

The primary benefit of microservices is that they have the potential to increase agility and productivity for the enterprises that implement them. New business services are developed through a combination of new microservices and the reuse of existing ones. The resulting new service can be rapidly deployed and tested in an automated cycle that will speed time to market for new business initiatives.

Rapid deployment means that business stakeholders can verify in real-time that an application meets its intended business objective. Once business objectives have been verified, then security, resiliency, and scalability can be dynamically configured at the microservice-level. This encourages operational flexibility and infrastructure optimization.

Challenges of microservices

Whilst microservices offer the potential to speed delivery of business initiatives, they also present challenges. The complexity of a large-scale microservices architecture cannot be overstated. If microservices are not established and managed correctly, they can lead to slower delivery speeds and poor operational performance.

In developer-led enterprises, these challenges can manifest in the need for teams to spend time building cross-cutting supporting services. The tendency can be to look toward rolling your own cross-cutting services on top of a PaaS. PaaS vendors, such as AWS and Azure, provide discrete components of a microservices architecture such as API management, message-oriented middleware, and service directories. These require developers to wire components together to form the foundational services of their microservices architecture. This effort diverts teams from building business services to building supporting services instead.

Enter Anypoint Platform 

Bootstrapping a microservices architecture requires addressing the challenges of scale outlined above. This is where Anypoint Platform fits in by providing a solution and a suite of services to allow developers to build, manage, and reuse microservices.

The role of APIs in microservices

APIs are standardized wrappers that create an interface through which microservices can be packaged and surfaced. This makes APIs the logical enforcement point for key concerns of a microservices architecture such as security, governance, and reuse. Because APIs are able to house these concerns, they are considered a foundational component of a microservices architecture.

How Anypoint Platform fits in

MuleSoft has built on this concept of the API as a foundational component by developing Anypoint Platform, a solution that enables the development and delivery of the API layer that describes and enriches enterprise microservices. Anypoint Platform provides critical cross-cutting services to these microservices, including security, governance, and reuse.

Additionally, Anypoint Platform can be used to implement full stack microservices, often referred to as integration microservices. Integration microservices combine orchestration between business microservices with connectivity to legacy assets. Integration microservices form a connective tissue across a microservices architecture.
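For the technically inclined, here is a hedged sketch of what an integration microservice might look like as a Mule flow. The service names, configurations, and table are hypothetical; the point is the combination of orchestration across business microservices with connectivity to a legacy asset:

<flow name="orderStatus">
  <http:listener path="/order-status" method="GET" config-ref="apiListener" />
  <!-- Orchestrate two business microservices, keeping each result in its own variable -->
  <http:request path="/orders" config-ref="ordersService" target="order" />
  <http:request path="/shipping" config-ref="shippingService" target="shipping" />
  <!-- Reach back into a legacy asset for order history -->
  <db:select config-ref="legacyDb" target="history">
    <db:sql>SELECT * FROM order_history WHERE order_id = :id</db:sql>
    <db:input-parameters>#[{id: attributes.queryParams.orderId}]</db:input-parameters>
  </db:select>
</flow>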

Go forth and conquer the world of microservices 

Microservices represent the next frontier in application development––allowing businesses to rapidly deliver new initiatives. Key to this is a pre-packaged suite of services for implementing APIs and a framework for managing large-scale microservices architectures. MuleSoft provides such a solution––Anypoint Platform––that enables enterprises to embrace microservices today.

Learn more about best practices for microservices and microservices patterns.

Source: https://blogs.mulesoft.com/biz/microservices-biz/microservices-versus-apis/

10 ways Mule 4 will make your life easier


By now, you have probably heard a lot about how Mule 4 makes it easier to leverage the power of Mule in your integrations. In fact, many of our customers are already adopting Mule 4––providing great feedback about how they can on-ramp new developers much faster.

But we don’t want you to just take our word for it, so let’s look at 10 quick examples of Mule 4 features in action!

1. Seamless access to data

In Mule 4, we’re leveraging DataWeave not only as a transformation language, but as our expression language as well. This means that you can harness the power of DataWeave in any of your expressions, no matter what component you’re using it in. This is particularly handy when accessing data. Suppose you have a flow in which the following JSON payload is posted through HTTP:

{
  "persons": [
    {
      "name": "John Doe",
      "age": 35
    },
    {
      "name": "Jane Doe",
      "age": 32
    },
    {
      "name": "John Jr.",
      "age": 7
    }
  ]
}

This is what a Mule 4 flow that iterates through those persons looks like:

<flow name="persons">
  <http:listener path="person" method="POST" config-ref="httpListener" />
  <foreach collection="#[payload.persons]">
    <choice>
      <when expression="#[payload.age > 21]">
        <logger message="#['$(payload.name) is an adult']" />
      </when>
      <otherwise>
        <logger message="#['$(payload.name) is a minor.']" />
      </otherwise>
    </choice>
  </foreach>
</flow>

So what makes the above features great?

  • At no point do you have to worry about the fact that the data is in JSON format. Mule figures that out automatically and knows how to access that format. In fact, the exact same flow works in a similar manner even if the data arrived as XML or Java objects.
  • A <foreach> statement can now split records from a JSON array. You no longer need to add transformations just to make sure that <foreach> obtains a Java collection.
  • You can leverage DataWeave to access data both in the <foreach> and <choice> components. Again, you are only required to know the structure of the data, never the format.
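As a rough illustration of that format-agnosticism, here is what the same data might look like as XML. The components stay exactly the same; at most, the selector changes to match the shape of the document (for example, #[payload.persons.*person] to collect the repeated elements):

<persons>
  <person>
    <name>John Doe</name>
    <age>35</age>
  </person>
  <person>
    <name>Jane Doe</name>
    <age>32</age>
  </person>
  <person>
    <name>John Jr.</name>
    <age>7</age>
  </person>
</persons>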

2. New error handling features

This feature has already been covered at length in this blog post, and you can read more great material about it in the documentation. However, I’d like to focus not only on how the “try..catch” semantics make it easier to deal with errors, but also on how Anypoint Studio helps you discover possible errors at design time.

Consider a flow that uses the try scope to deal with different errors in different ways. What’s great is that when you add an error handler, Studio allows you to specify the type of error you’d like to catch. There’s a magnifying glass icon in the right corner, which you can click on to obtain a complete list of all the possible errors for that particular try scope. Studio aggregates the errors from the different connectors, and the errors themselves are self-describing.
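Since the original screenshot is not reproduced here, the following is a rough sketch of such a flow. The config names are illustrative, and HTTP:NOT_FOUND is just one example of the typed errors Studio can list:

<flow name="getUser">
  <http:listener path="/users" method="GET" config-ref="httpListener" />
  <try>
    <!-- Call a backing service that may fail in several distinct ways -->
    <http:request path="/user" config-ref="httpRequester" />
    <error-handler>
      <!-- Handle a missing user without failing the whole flow -->
      <on-error-continue type="HTTP:NOT_FOUND">
        <set-payload value="#[{'error': 'user not found'}]" />
      </on-error-continue>
      <!-- Let anything else propagate to the flow's own handler -->
      <on-error-propagate type="ANY">
        <logger level="ERROR" message="#['Unexpected error: $(error.description)']" />
      </on-error-propagate>
    </error-handler>
  </try>
</flow>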

3. Repeatable streaming 

Repeatable streaming is another great feature we added, which we have previously discussed in blog posts and documentation. Repeatable streaming completely hides the concept of streaming from you. You don’t have to know which components perform streaming, or worry about streams being consumed more than once. You don’t even need to know what streaming is at all!

You will no longer need to enable or disable streaming features in connectors like FTP or Database, because Mule will automatically stream whenever it’s applicable. You also don’t need to add <object-to-string> transformers or avoid logging part of your message, as the full content of the stream will always be available––even if the stream is consumed more than once or in parallel, as in the example below:
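The example image from the original post is not reproduced here, so the following is a minimal hedged sketch of the idea, with illustrative config names: the file is read once as a stream, yet both the logger and the write operation consume the full payload, because the stream is repeatable:

<flow name="copyFile">
  <!-- Reads the file as a repeatable stream -->
  <file:read path="input.csv" config-ref="fileConfig" />
  <!-- First consumer of the stream: logging the full content -->
  <logger message="#[payload]" />
  <!-- Second consumer: the same stream can be read again from the start -->
  <file:write path="output.csv" config-ref="fileConfig" />
</flow>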

4. Triggers

Triggers are a new way of starting the execution of your flow. They are Message Sources that have common use cases built-in so that you don’t have to manually build them. For example, suppose you want to poll an SFTP folder for new files. This task would roughly imply that you:

  1. Set up a scheduler,
  2. List the contents of the folder,
  3. Process the files, and
  4. Move/delete the processed files or update a watermark so that the files are not processed again in the next poll.

There’s a trigger in the new SFTP connector which already does all of the above in one single step, and you can read all about it in the documentation.
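A hedged sketch of what that might look like follows; the directory, frequency, and config names are illustrative, and the exact attributes are described in the SFTP connector documentation:

<flow name="processNewFiles">
  <!-- Polls the folder on a schedule and watermarks processed files -->
  <sftp:listener config-ref="sftpConfig" directory="incoming" watermarkEnabled="true">
    <scheduling-strategy>
      <fixed-frequency frequency="60000" />
    </scheduling-strategy>
  </sftp:listener>
  <!-- Each new file arrives here as the payload -->
  <logger message="#['Processing $(attributes.fileName)']" />
</flow>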

Connectors that currently support triggers include the File, FTP, FTPS, Database, and Salesforce connectors––and the list keeps growing!

5. Frictionless upgrades

It was critical for us to make it easy for users to move across versions in this latest release. We achieved this by adding classloading isolation, which ensures that changes in Mule Runtime and a connector’s internal implementations do not affect your application. We also achieved this by decoupling the release cycles of Mule Runtime from those of its modules.

For example, imagine that we release a new version of the HTTP connector with a bug fix or a new feature you’d like to use in one of your applications. Because connectors are now released and versioned separately from the runtime, you can just get the upgrade from Exchange and update only the connector you install. In other words, you will not need to move to an entirely new version of Mule Runtime, which would also impact other connectors and require more sanity checks.
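Concretely, a Mule 4 application declares each connector as an ordinary Maven dependency, so the upgrade is a one-line version bump in the pom.xml. The version number below is illustrative only:

<!-- In the application's pom.xml: bump only the connector version -->
<dependency>
  <groupId>org.mule.connectors</groupId>
  <artifactId>mule-http-connector</artifactId>
  <version>1.1.0</version>
  <classifier>mule-plugin</classifier>
</dependency>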

The same logic works the other way around! Imagine you want to take advantage of a new Runtime capability around management. You can now upgrade the Runtime knowing that all the connectors in your application will not change or require any upgrades.

6. Self-tuning capabilities

Performance and scalability are paramount––regardless of the use case. Mule 4 ships with a brand new execution engine. The reactive and non-blocking nature of this engine makes it highly scalable while using a small number of threads.

We also gave the engine the ability to self-tune. You will no longer need to configure thread pools, threading profiles, exchange patterns, and processing strategies in order to get the most out of your flows. With this release, Mule 4 now analyzes runtime conditions and makes adjustments automatically.

7. Simplified message model

The Mule message has been simplified. Besides the payload, there’s now the concept of Attributes, which is a strongly typed object that holds metadata about that payload. For example, if your payload is a file, then the payload will hold the contents of that file, while the Attributes object will hold the file’s path, size, timestamps, etc.

If the message was an HTTP request, then the payload will hold the body, while you’ll find the request path, query params, headers, remote address, etc in the Attributes. All of these changes remove the need for the inbound, outbound, and session scopes in Mule 3, while providing strongly typed data that is easier to deal with and discover. You can find more information about the simplified message model in Part I and Part II of this blog series.
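As a small illustration (assuming an HTTP listener configuration named httpListener), request metadata can be read straight off the attributes:

<flow name="inspectRequest">
  <http:listener path="/inspect" method="GET" config-ref="httpListener" />
  <!-- The payload holds the body; request metadata lives in the attributes -->
  <logger message="#['Path: $(attributes.requestPath)']" />
  <logger message="#['Hello, $(attributes.queryParams.name)!']" />
</flow>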

8. New connectors

We completely revamped most of our core connectors. They’re now more powerful and easier to use, and, most of all, they’re consistent. Before, we had both transport-based and operation-based connectors; now we have standardized on the latter in order to provide a consistent and predictable experience. This means that if you learn how to use one connector, you will know how to use them all. You can see some examples by looking at the following posts:

9. Simplified enrichment

We also simplified enrichments. Enrichers are useful when you want to execute actions to obtain a new piece of data, but you want to do so without losing the data you already have in your payload. To make this easy, Mule adds a target parameter to all operations that return data. You can use this parameter to redirect the output of that operation to a variable of your liking––preserving all other data in your current message. You can apply enrichers not only to connector operations, but to <flow-ref> as well (see the short sketch after the example below). For example:

<flow name="getStockInfo">
  <http:listener path="stock" method="GET" config-ref="httpListener">
    <http:response>
      <http:body><![CDATA[#[%dw 2.0
output application/json
---
{
  "ticker": attributes.queryParams.ticker,
  "price": vars.price as Number,
  "variation": vars.variation as Number
}]]]></http:body>
    </http:response>
  </http:listener>
  <http:request path="price" config-ref="httpRequester" target="price">
    <http:query-params>#[{"ticker": attributes.queryParams.ticker}]</http:query-params>
  </http:request>
  <http:request path="variation" config-ref="httpRequester" target="variation">
    <http:query-params>#[{"ticker": attributes.queryParams.ticker}]</http:query-params>
  </http:request>
</flow>
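And since target works on <flow-ref> too, enrichment from another flow looks like this minimal hedged sketch, where lookupCustomer is a hypothetical flow:

<!-- Store the result of the referenced flow in a variable, leaving the payload untouched -->
<flow-ref name="lookupCustomer" target="customer" />
<logger message="#[vars.customer]" />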

10. Migration tools

We also want to make the migration to Mule 4 as easy as possible. We recently announced the Beta release of the DevKit Migration Tool. This tool makes it really easy to migrate a Mule 3 DevKit project into a Mule SDK project that can be used with Mule 4. We’re proud to announce that the tool is now officially GA. Please check out the documentation for more information and instructions.

We’re also really happy to announce that a Mule 3 to Mule 4 migration tool will be available soon. This tool will take a Mule 3 application and transform it into a Mule 4 application. Although this tool will be able to automatically migrate a large number of use cases, it won’t be able to migrate everything.

The tool will generate a migration report, which will give users clear insights into all components that could not be migrated automatically. At the same time, the tool will also provide clear instructions on how to complete the migration yourself.

Conclusion

Overall, making Mule easier to use and learn was the driving principle of this release. Join the customers that are already taking advantage of these improvements by trying Mule 4 today!

 

Source: https://blogs.mulesoft.com/dev/mule-dev/10-ways-mule-4-will-make-your-life-easier/