The following article belongs to the “Microservices” series (MICROSERVICES: GETTING STARTED, MICROSERVICE ARCHITECTURE: BENEFITS AND CHALLENGES) and aims to dive deeper into the day-to-day challenges lying behind the software development process of a microservice architecture.

Today the microservice architecture is a de facto standard and is often the preferred architecture for developing a web application. Even so, if you are interested in this kind of architecture, you certainly still have questions.

Are you relatively new to microservices? Or have you already started the migration of your monolith towards microservices? Is the microservice architecture more complex than the monolithic one? Yes, definitely. And it is even more demanding if the Agile and DevOps (in particular CI/CD) cultures are not mature.

Here are some challenges you might face during your journey!


The complexity of designing, developing, testing, and maintaining microservices grows exponentially with the number of services you need to maintain. That is mainly due to:

  • Their diversity (in terms of functional and technical requirements)
  • The number of interactions among them and with external systems
  • The number of dependencies (on other microservices/systems and on source-code libraries and their updates).

So, think big and do not lose time reinventing the wheel – instead, save your budget for tool licenses.



If you want to centralize and insulate the clients from the underlying architecture, simplify API and security management, etc., you may want to use the “API Gateway” pattern. This pattern has a lot of benefits, but it also increases the complexity. Just in terms of infrastructure, in addition to the gateway component itself, you will need a client load balancer and a discovery service.
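To make the moving parts concrete, here is a minimal sketch of the "API Gateway" pattern together with the two extra components it pulls in: a discovery registry and client-side load balancing. The service names, addresses, and routing table are hypothetical, and a real gateway (Kong, Spring Cloud Gateway, etc.) would of course do far more.

```python
# Sketch of an API Gateway resolving a public path to one backend instance.
# REGISTRY plays the role of the discovery service; random.choice plays the
# role of the client-side load balancer. All names here are illustrative.

import random

# Discovery service: logical service name -> known live instances.
REGISTRY = {
    "orders":  ["orders-1:8080", "orders-2:8080"],
    "billing": ["billing-1:8080"],
}

# Gateway routing table: public path prefix -> logical service name.
ROUTES = {
    "/api/orders":  "orders",
    "/api/billing": "billing",
}

def route(path: str) -> str:
    """Resolve a request path to one backend instance."""
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return random.choice(REGISTRY[service])
    raise KeyError(f"no route for {path}")

print(route("/api/orders/42"))  # one of the two 'orders' instances
```

The client only ever sees the gateway's paths; instances can be added or removed from the registry without any client change.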

If you go for the “Database per service” microservice pattern, each service should have its own database and transaction management, which implies two main issues.

The first issue is that many client transactions need to access and join data spanning multiple services (e.g., you need data from various databases/tables and transactions spanning several microservices).

The second issue is that you may end up having duplicated, partitioned or redundant data across data stores (due to analytics, reporting, archiving), threatening data integrity and consistency.

To cope with the first one, the data owned by a microservice should be private and only accessible through its API. Unfortunately, this creates new interactions between microservices and thus increases the development effort/time. Moreover, since transactions are limited to a single microservice, processing errors may lead to inconsistencies across services. The Saga pattern handles this issue either by centralizing rollback through an orchestrator or by solving it asynchronously through choreography; in both cases, you have to accept a temporary inconsistency (eventual consistency) until all the participating services converge.
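The orchestrated variant of the Saga pattern can be sketched in a few lines: each step pairs a local transaction with a compensating action, and on failure the orchestrator runs the compensations of the completed steps in reverse order. The step names below are illustrative, not a real framework API.

```python
# Hedged sketch of an orchestrated Saga. Real implementations must also
# persist saga state so compensation survives a crash of the orchestrator.

def run_saga(steps):
    """steps: list of (action, compensation) callables. Returns True on success."""
    done = []
    for action, compensation in steps:
        try:
            action()
            done.append(compensation)
        except Exception:
            for comp in reversed(done):  # roll back only the completed steps
                comp()
            return False
    return True

def fail_payment():
    raise RuntimeError("payment service unavailable")

log = []
ok = run_saga([
    (lambda: log.append("stock reserved"), lambda: log.append("stock released")),
    (fail_payment,                         lambda: log.append("payment refunded")),
])
print(ok, log)  # False ['stock reserved', 'stock released']
```

Note that between "stock reserved" and "stock released" the system is temporarily inconsistent: that is exactly the eventual consistency the pattern asks you to accept.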

To cope with the second one, you may want your microservices to either:

  • use optimistic locking,
  • use conflict-free data structures (allowing concurrent modifications without creating conflicts),
  • write a new version of the object in the database instead of updating it in place,
  • adopt the Single-Writer pattern (designate the one microservice allowed to modify that data).
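The first of these options, optimistic locking, can be sketched with an in-memory store standing in for the database: every record carries a version number, and a write is rejected if the version changed since the record was read. The record and field names are illustrative.

```python
# Sketch of optimistic locking: no locks are held while the client works;
# the conflict is detected only at write time via the version counter.

class StaleWriteError(Exception):
    pass

store = {"order-1": {"version": 1, "status": "NEW"}}

def update(key, expected_version, **changes):
    record = store[key]
    if record["version"] != expected_version:
        raise StaleWriteError("record changed since it was read; re-read and retry")
    record.update(changes)
    record["version"] += 1

update("order-1", expected_version=1, status="PAID")  # succeeds, version -> 2
try:
    update("order-1", expected_version=1, status="SHIPPED")  # stale read, rejected
except StaleWriteError:
    pass
```

In a relational database the same check is usually a `WHERE version = ?` clause on the UPDATE statement.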


Each interaction between microservices should be allowed only upon a successful authentication/authorization process based on roles and permissions (using an API Gateway or custom solutions).

As usual, it is much better to design your application with security requirements in mind from the start (e.g., GDPR privacy regulations, TLS, encryption, key management, authentication). Introducing them later in the development cycle may cause high refactoring costs.
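The role-and-permission check mentioned above can be reduced to a simple sketch: each caller presents a credential mapped to a set of roles, and each endpoint declares the roles it requires. The tokens, role names, and endpoints below are all hypothetical; in practice the roles would come from a validated JWT or from the gateway.

```python
# Sketch of role-based authorization between microservices, deny-by-default.

TOKEN_ROLES = {
    "billing-service-token": {"orders:read", "invoices:write"},
}

REQUIRED_ROLES = {
    "GET /orders":    {"orders:read"},
    "DELETE /orders": {"orders:admin"},
}

def authorize(token: str, endpoint: str) -> bool:
    granted = TOKEN_ROLES.get(token, set())
    required = REQUIRED_ROLES.get(endpoint)
    if required is None:
        return False  # unknown endpoint: deny by default
    return required <= granted  # every required role must be granted

print(authorize("billing-service-token", "GET /orders"))     # True
print(authorize("billing-service-token", "DELETE /orders"))  # False
```

Deny-by-default (the `required is None` branch) is the part most often forgotten when this logic is hand-rolled.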


Dependency and upgrade management across different services and their functionalities is critical, and cyclic dependencies should be identified and resolved promptly.
Highly coupled services require additional maintenance and create the risk of services calling each other in a circular manner indefinitely, which can bring the whole system down.
To avoid inconsistencies and to reuse existing code, you would need to build and maintain a shared library for shared code (which in turn requires further dependency management).
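Cyclic dependencies are easy to detect mechanically once you have a declared call graph (service → services it calls). The graph contents below are hypothetical; the depth-first search itself is standard.

```python
# Sketch: find a cyclic call chain in a service dependency graph, or None.

def find_cycle(graph):
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        for dep in graph.get(node, []):
            if dep in visiting:          # back edge: we closed a loop
                return path + [dep]
            if dep not in visited:
                found = dfs(dep, path + [dep])
                if found:
                    return found
        visiting.discard(node)
        visited.add(node)
        return None

    for node in graph:
        if node not in visited:
            found = dfs(node, [node])
            if found:
                return found
    return None

print(find_cycle({"orders": ["billing"], "billing": ["orders"]}))
# ['orders', 'billing', 'orders']
```

Running such a check in CI, against dependencies declared in each service's metadata, catches cycles before they reach production.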


Microservices may provide versioned APIs/endpoints (“*/v1/*”, “*/v2/*”, etc.). Team members then need to communicate efficiently about the changes and maintenance of the different microservices in order to coordinate releases of several interacting microservices. Otherwise, you easily risk going “waterfall” (for example, changing the first microservice in one sprint and the second one in the next), slowing down development/integration, increasing effort, and introducing feedback loops between teams. With this in mind, it is worth accurately designing “feature teams” to mitigate these issues.

Moreover, microservices transformation often involves shifting the competencies and decision-making power from managers and architects to individual teams.


On one side, the microservice architecture offers flexibility in choosing different programming languages or frameworks.

On the other side, maintenance will likely be more challenging and costlier. Developers may quickly lose the big picture of how the system functions, and you may bring in more and more tools, servers, and APIs, which may lead to a lack of uniformity and a less cohesive end product.

Communication between microservices is a tedious and complex burden to design (synchronous or asynchronous, with different possibilities for each), implement, extend (without breaking the API contract, so as not to impact clients and the API Gateway), maintain, document, and communicate (so that dependent services can upgrade). Detecting the impact of a change in one microservice on the microservices interacting with it is not straightforward (you do not get red errors in your IDE). Using graceful deprecation approaches is recommended.
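One common graceful-deprecation technique is to keep the old versioned endpoint alive as a thin adapter over the new handler, rather than breaking existing clients. The payload shape below (a flat `customer_name` in v1 versus a structured `customer` object in v2) is purely illustrative.

```python
# Sketch: serve /v1 by adapting its payload to the /v2 handler, so old
# clients keep working while they migrate at their own pace.

def handle_v2(payload: dict) -> dict:
    # v2 contract: expects a structured customer object.
    customer = payload["customer"]
    return {"greeting": f"hello {customer['name']}", "api": "v2"}

def handle_v1(payload: dict) -> dict:
    # v1 contract: clients still send a flat 'customer_name'.
    adapted = {"customer": {"name": payload["customer_name"]}}
    response = handle_v2(adapted)
    return {**response, "api": "v1"}  # preserve the old contract's version tag

print(handle_v1({"customer_name": "ada"}))
# {'greeting': 'hello ada', 'api': 'v1'}
```

The business logic lives once, in the v2 handler; the v1 route is reduced to pure translation and can be deleted when the last consumer migrates.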


Testing, in general, is more complex and longer. The need for full coverage may result in a lot of mock services to test small units, and integrating several microservices brings its own difficulties: you may have both synchronous and asynchronous messaging, interdependencies between microservices, and issues in testing availability and resiliency (as part of non-functional requirements, like performance, response time, and throughput), which may even require randomly disabling servers to recreate possible fail-over scenarios.

An application or microservice that is deployed often (even multiple times a day) needs automated tests integrated into the build and deployment pipeline.

Preferably, generate test data from anonymized production data to make tests more realistic and qualitative, and run non-regression tests at each deployment.
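The "lot of mock services" point can be illustrated with the simplest possible case: to unit-test one service's logic, the HTTP call to a dependency is replaced by a stub so that no network is involved. The `pricing` service and its method names are hypothetical.

```python
# Sketch of unit-testing a microservice with its downstream dependency
# stubbed out: only the local computation is exercised.

class PricingClient:
    def unit_price(self, sku: str) -> float:
        # The real implementation would call the pricing service over HTTP.
        raise NotImplementedError

def order_total(pricing: PricingClient, sku: str, quantity: int) -> float:
    """The local logic under test in the 'orders' service."""
    return pricing.unit_price(sku) * quantity

class StubPricing(PricingClient):
    def unit_price(self, sku: str) -> float:
        return 2.5  # canned answer: deterministic, no network involved

assert order_total(StubPricing(), "sku-1", 4) == 10.0
```

Libraries such as `unittest.mock` generalize this, but the principle stays the same: each mocked dependency is one more contract you must keep in sync with the real service, which is why contract tests are usually added alongside.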


You will need to take care of the deployment of each microservice individually: this tedious operation may also require coordination among multiple services in case of dependencies.

You may either opt for a Cloud-based deployment solution or a custom, homemade one.

As service instances' IP addresses change dynamically, a client should adopt a service-discovery mechanism (such as Istio or an API Gateway) to request a service.
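The core of such a discovery mechanism is a registry that instances join and leave at runtime, with clients resolving a logical name to a live address instead of hard-coding IPs. The service names and addresses below are made up.

```python
# Sketch of client-side service discovery with a dynamic registry.

import random

class Registry:
    def __init__(self):
        self._instances = {}

    def register(self, service, address):
        self._instances.setdefault(service, set()).add(address)

    def deregister(self, service, address):
        self._instances.get(service, set()).discard(address)

    def resolve(self, service):
        """Return one live address for the service (naive load balancing)."""
        live = self._instances.get(service)
        if not live:
            raise LookupError(f"no live instance of {service}")
        return random.choice(sorted(live))

reg = Registry()
reg.register("orders", "10.0.0.7:8080")
reg.register("orders", "10.0.0.9:8080")
reg.deregister("orders", "10.0.0.7:8080")   # instance went away
print(reg.resolve("orders"))                # 10.0.0.9:8080
```

Real registries (Consul, Eureka, the Kubernetes API) add what this sketch omits: health checks and automatic expiry of instances that stop sending heartbeats.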


Given the complexity of a microservices environment and the complex dependency chains, failure is inevitable.

To ensure overall availability, developers should know all the ways each microservice (and the corresponding architecture) may fail and ensure that none of them occurs – and if they do, that they do not make the whole system fail. Hence the need to implement both internal and external failure-handling mechanisms, robust resiliency testing, and the ability to restore backups if needed. If your databases run on cloud infrastructure, you should configure and deploy scalable database clusters with proper backup and recovery strategies, so the application can keep working while your storage provider recovers from an outage.

By using service meshes, it is possible to switch off microservices that are too slow or unavailable (through circuit breakers and fault-tolerance frameworks, which results in offering reduced functionality). They will limit the throughput and the resource usage (indirectly freeing resources like database connections, for instance). If a microservice needs to support multiple versions of its API simultaneously until all service consumers have migrated, you will need either an adapter from the old API to the new one or to run the old and the latest versions in parallel (if the underlying database structures permit it).
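The circuit-breaker behaviour mentioned above can be sketched in its most stripped-down form: after N consecutive failures the breaker "opens" and calls fail fast instead of piling up on a slow or dead service. The threshold is illustrative, and real frameworks (Resilience4j, Istio's outlier detection) also add a half-open state with a recovery timeout, which this sketch omits.

```python
# Minimal circuit-breaker sketch: fail fast once the failure count
# reaches the threshold, instead of waiting on a dead dependency.

class CircuitOpenError(Exception):
    pass

class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, func, *args):
        if self.failures >= self.max_failures:
            raise CircuitOpenError("downstream service disabled")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # any success resets the count
        return result

breaker = CircuitBreaker(max_failures=2)

def flaky():
    raise TimeoutError("downstream too slow")

for _ in range(2):
    try:
        breaker.call(flaky)   # real failures, counted by the breaker
    except TimeoutError:
        pass

try:
    breaker.call(flaky)       # breaker is now open: fails fast
except CircuitOpenError:
    print("serving degraded response instead")
```

The caller catches `CircuitOpenError` and returns a cached or reduced response, which is exactly the "reduced functionality" trade-off described above.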


Debugging a problem to find the point of failure will be more expensive and time-consuming because each microservice has its own set of logs to go through. When investigating issues whose cause you do not know, you may have to work backward from status codes or vague error messages. You might add more logging to a particular service and redeploy it, hoping that this time the issue will resurface with more context.

For this reason, we put in place a distributed tracing mechanism that assigns an identifier (“traceId”) to each request/response interacting with our microservices: this way, we can track the logs of all the method calls made during that request.
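The principle fits in a few lines: the traceId is generated once at the edge and passed along on every downstream call, so the log lines of one request can be correlated across services. The service names are illustrative; in practice the identifier travels in an HTTP header (the W3C Trace Context `traceparent` header is the standard form) rather than as a function argument.

```python
# Sketch of traceId propagation across two services sharing one request.

import uuid

LOG = []

def log(service, trace_id, message):
    LOG.append(f"[{trace_id}] {service}: {message}")

def billing_service(trace_id):
    log("billing", trace_id, "charging card")

def order_service(trace_id):
    log("orders", trace_id, "order received")
    billing_service(trace_id)  # propagate the SAME traceId downstream

trace_id = str(uuid.uuid4())   # generated once, at the edge
order_service(trace_id)

for line in LOG:
    print(line)  # every line carries the same [traceId] prefix
```

Grepping the aggregated logs for one traceId then reconstructs the full path of the request, which is what tools like Jaeger and Zipkin automate.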

Finally, it is preferable to proactively set up monitoring solutions, such as APMs.


In terms of costs, you may observe an increase due to: the need for more computational resources (provisioning of messaging middleware, microservices load balancing, CI/CD pipeline builds, test infrastructure); higher network latency; and the cost of team members having all the required skills (full-stack development, microservice patterns, databases, build, deployment, operations, support).


Stay tuned to this channel, and please feel free to reach out to us at info@smartwavesa.com if you would like to know more about how we can guide you along the challenging microservices journey!
