Introduction
The serverless paradigm represents a notable shift in the way that application and web developers interact with infrastructure, language runtimes, and supplemental services. It offers the freedom to focus on your primary area of concern by abstracting away and taking on responsibility for many of the environmental factors that traditionally affect the way code runs in production.
While serverless computing has many advantages, it also has some challenges that must be acknowledged or addressed before you can be successful. In this guide, we'll talk about some of the main pain points of the current generation of solutions and discuss what they mean and how you might work around them. You should come away with a better idea of the requirements you may need to fulfill and the obstacles you might encounter.
Prisma Accelerate provides a way to handle connection issues between serverless applications and backend databases. It can help manage ephemeral connections from your serverless functions to avoid exhausting your database's connection pool. Check it out now!
The cold start problem
One of the most commonly discussed challenges when working with serverless is known as the cold start problem. While the goal of serverless is to allow functions to execute immediately on demand, there are scenarios that can lead to predictable delays.
What is the cold start problem?
One of the major selling points of serverless is the ability to scale to zero during periods of no activity. If a function is not being actively executed, its resources are released, returning capacity to the platform and reducing the cost to the user of keeping those components reserved. This is ideal from a cost perspective, since it means users only pay for the time and resources their code actually spends executing.
The downside of this is that when resources are completely released, there is a predictable delay the next time the function needs to execute. Resources have to be reallocated to run the function, and that takes time. You end up with "warm" functions that were recently used and have one set of performance characteristics, and "cold" functions that must wait for the platform to create the execution environment and have another.
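The warm/cold distinction comes down to whether the platform reuses an existing execution environment. A minimal sketch of that behavior, using module scope to stand in for a reused container (the handler and event shape here are hypothetical, not any provider's API):

```typescript
// Module-level state survives between invocations in a "warm" container,
// but is rebuilt from scratch on a cold start.
let initializedAt: number | null = null;

function handler(event: { name: string }): { coldStart: boolean; greeting: string } {
  const coldStart = initializedAt === null;
  if (coldStart) {
    // Expensive setup (loading config, opening clients) happens only here,
    // which is what produces the cold-start latency.
    initializedAt = Date.now();
  }
  return { coldStart, greeting: `Hello, ${event.name}` };
}

// The first call pays the cold-start cost; later calls reuse the warm state.
const first = handler({ name: "a" });  // coldStart === true
const second = handler({ name: "b" }); // coldStart === false
console.log(first.coldStart, second.coldStart);
```

A cold start replays everything above the handler body as well as the platform's own provisioning work, which is why the delay is noticeable even for small functions.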
How do developers try to address the cold start problem?
Developers and platforms have experimented with a number of ways of addressing this problem. Some developers schedule "dummy" requests to keep the resources associated with their functions on standby. Many platforms have added an extra tier to their services that allows developers to automatically keep resources on standby.
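The "dummy request" tactic is usually just a scheduled ping. A sketch of the idea, with a stubbed pinger in place of a real HTTP call (the URL and scheduler are hypothetical; a real setup would use the platform's cron-style trigger):

```typescript
// A pinger resolves to an HTTP status code for the keep-warm request.
type Pinger = (url: string) => Promise<number>;

// Ping the function periodically so the platform never reclaims its resources.
async function keepWarm(ping: Pinger, url: string): Promise<boolean> {
  const status = await ping(url);
  return status >= 200 && status < 300; // true if the function answered
}

// Demonstration with a stub instead of a network request:
keepWarm(async () => 200, "https://example.com/my-function")
  .then((warm) => console.log(warm ? "function kept warm" : "ping failed"));
```

In practice the schedule interval has to be shorter than the platform's idle timeout, which is rarely documented precisely, so the interval is usually found by experimentation.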
These solutions begin to blur the definition of a serverless environment somewhat. When developers are forced to pay for standby resources while their code is not actively executing, it calls into question some of the fundamental claims of the serverless paradigm.
A more recent alternative to pre-allocating resources sidesteps the problem by switching to a lighter-weight runtime environment. Runtimes like V8 have a significantly different execution strategy than traditional serverless and are able to avoid cold start problems by using different isolation techniques and a leaner environment. They avoid the cold start problem at the cost of compatibility with functions that depend on a more robust environment.
Application design constraints
Another fundamental challenge of the serverless model is the application design it imposes. Serverless platforms are only useful for applications that can work within their constraints. Some of these are inherent to cloud computing in general, while others are specific to the serverless model.
Designing a cloud-friendly architecture
The first requirement that applications must meet to use serverless platforms is to be designed in a cloud-friendly way. In the context of this discussion, this means at a minimum that the application must be at least partly deployable to a cloud service that the other components are able to communicate with. And while it's possible to implement monolithic functions in your design, the serverless model best accommodates microservice architectures.
The upshot of this is that your application must be designed in part as a series of functions executed by your serverless provider. You must be comfortable with the processing taking place on infrastructure that you do not control. Furthermore, you must be able to decompose your application's functionality into discrete functions that can be executed remotely.
Dealing with stateless execution
Serverless functions are, by design, stateless. That means that, while some information may be cached if the function is executed with the same resources, you can't rely on any state being shared between invocations of your functions.
You must design your functions to have all of the information they need to execute internally. Any external state must be fetched at the beginning of invocation and exported before finishing. Since functions may be executed in parallel, this also limits what type of state may reasonably be acted upon. In general, the less state your functions have to manage, the faster and cheaper they will be to execute and the less complexity you will have to manage.
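The fetch-at-start, export-before-finish pattern can be sketched as follows. The `Store` interface here is a hypothetical stand-in for a database or object storage client:

```typescript
// Any external state a function needs must come through an explicit interface;
// nothing is carried over from previous invocations.
interface Store {
  get(key: string): Promise<number>;
  put(key: string, value: number): Promise<void>;
}

async function incrementCounter(store: Store, key: string): Promise<number> {
  const current = await store.get(key); // fetch external state at the start
  const next = current + 1;             // all work operates on local values
  await store.put(key, next);           // export state before finishing
  return next;
}

// In-memory stand-in for demonstration:
const data = new Map<string, number>([["visits", 41]]);
const memStore: Store = {
  get: async (k) => data.get(k) ?? 0,
  put: async (k, v) => { data.set(k, v); },
};
incrementCounter(memStore, "visits").then((n) => console.log(n)); // logs 42
```

Note that because invocations can run in parallel, a read-modify-write sequence like this one still needs the backing store to provide atomicity or locking in production; the sketch only shows the data flow.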
There are other side effects of the function's ephemeral nature as well. If your functions need to reach out to a database system, there is a good chance you may quickly exhaust your database's connection pool. Since each invocation of your functions can be executed in a different context, your database's connection pool can quickly drain as it responds to different invocations or tries to return resources to its pool. Solutions like Prisma Accelerate help mitigate these issues by managing the connection resources for the serverless instances in front of whatever connection pooling is in place.
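One common mitigation at the code level is to cache the database client in module scope so that warm invocations reuse a single connection instead of opening a new one each time. A sketch, with a hypothetical `Database` type standing in for a real driver:

```typescript
interface Database { query(sql: string): Promise<string[]> }

let cached: Database | null = null; // module scope survives warm invocations
let opened = 0;                     // how many connections this container created

async function getDb(connect: () => Promise<Database>): Promise<Database> {
  if (cached === null) {
    cached = await connect(); // pay the connection cost once per warm container
    opened++;
  }
  return cached;
}

async function handler(connect: () => Promise<Database>): Promise<string[]> {
  const db = await getDb(connect);
  return db.query("SELECT 1");
}

// Stubbed connector in place of a real database driver:
const fakeConnect = async (): Promise<Database> => ({ query: async () => ["1"] });

// Two sequential invocations in the same warm container share one connection:
handler(fakeConnect)
  .then(() => handler(fakeConnect))
  .then(() => console.log(`connections opened: ${opened}`));
```

This only helps within a single container, though: many cold containers running in parallel still each hold a connection, which is why an external pooling layer such as Prisma Accelerate is often still needed.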
Provider lock-in concerns
One challenge that is difficult to get away from with serverless is provider lock-in. When you architect your application to rely on external functions running on a specific provider's platform, it can be difficult to migrate to a different platform at a later time.
What types of lock-in can occur?
For applications built targeting a specific serverless platform, many different factors can interfere with cleanly migrating to another provider. These may result from the serverless implementation itself or from use of the provider's related services that might be integrated into the application design.
In terms of lock-in caused by the actual serverless implementation, one of the most basic differences between providers can be the languages supported for defining functions. If your application functions are written in a language not supported by other candidate providers, migration will be impossible without reimplementing the logic in a supported language. A more subtle example of serverless incompatibilities are the differences in the way that different providers conceptualize and expose the triggering mechanisms for functions within the platform. You might need to redefine how your trigger is implemented on your new platform if those mechanisms differ significantly.
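One way to soften trigger-mechanism differences is to translate each provider's event shape into one normalized request type at the edge of the function, so the core logic never depends on a specific platform. The event formats below are simplified, hypothetical stand-ins for what two providers might deliver to an HTTP-triggered function:

```typescript
// Provider-specific event shapes (hypothetical, for illustration):
type ProviderAEvent = { httpMethod: string; body: string };
type ProviderBEvent = { req: { method: string; payload: string } };

// A single normalized shape that the core logic depends on:
interface Request { method: string; body: string }

const fromProviderA = (e: ProviderAEvent): Request =>
  ({ method: e.httpMethod, body: e.body });
const fromProviderB = (e: ProviderBEvent): Request =>
  ({ method: e.req.method, body: e.req.payload });

// The core never sees provider types, so only the thin adapters
// need to change when migrating between platforms.
function core(req: Request): string {
  return `${req.method}:${req.body.length}`;
}

console.log(core(fromProviderA({ httpMethod: "POST", body: "hello" })));
console.log(core(fromProviderB({ req: { method: "GET", payload: "" } })));
```

This doesn't address differences in how triggers are configured or billed, but it keeps the bulk of the application logic portable.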
Other types of lock-in can occur when serverless applications use other services in their provider's ecosystem to support their application. For example, since serverless functions don't handle state, it's common to use the provider's object storage offering to store any artifacts produced during invocation. While object storage is widely implemented using a standard interface, it demonstrates how the constraints of the serverless architecture can lead to greater adoption and dependence on the ecosystem of other available services.
What developers do to try to limit lock-in
There are some ways that developers can attempt to minimize the likelihood or impact of lock-in for their applications.
Writing your functions in a widely supported language like JavaScript is one of the easiest ways to avoid hard dependencies. If your language of choice is supported by many providers, it gives you options for other platforms that might be able to run your code.
Developers can also try to limit their use of services to those that are commodity offerings supported almost the same on each platform. For instance, the object storage example we used before is actually an ideal example of a service that is likely replaceable by another provider's offering. The more specialized the service you're depending on, the more difficult it will be to move out of the ecosystem. This is a trade-off you'll have to evaluate on a case-by-case basis, as you might have to forgo specialized tools for their more generic counterparts.
Concerns about lack of control and insight when debugging
One of the common complaints levied at serverless by developers evaluating it for future projects is the lack of control and insight serverless platforms provide. Part of this is inherent in the offering itself, as control of the infrastructure running the code would, necessarily, disqualify the service from the serverless category. Still, developers are often apprehensive about deploying in an environment that limits visibility and control, especially when it comes to diagnosing issues that might affect uptime and impact production.
What types of differences can developers expect?
The promise of the serverless paradigm is to shift the responsibility for everything but the code itself to the platform provider. This can yield many advantages in terms of operations overhead and simplifying the execution environment for developers, but it also makes many techniques and tools that developers might typically rely on either more difficult or impossible to use.
For instance, some developers are used to being able to debug by accessing the programming environment directly, either by connecting to a host with SSH or by introspecting the code and using data that is exposed by the process. These approaches are generally not possible or easy in serverless environments because the execution environment is opaque to the user and only specific interfaces like function logs are available for debugging. This can make it difficult to diagnose problems, especially when a problem is impossible to reproduce locally or when multiple functions are invoked in a pipeline.
What options are available to help?
There are a number of different strategies developers can adopt to help them work within this more limited debugging environment.
Some serverless functionality can be run or emulated locally, allowing developers to debug on their own machine what they are unable to debug in production on their provider. A number of tools were designed to emulate common serverless platforms so that developers can recapture some of the diagnostic capabilities they might be missing. They can allow you to step through functions, see state information, and set breakpoints.
For debugging on the platform itself, you have to try to take advantage of all of the tools offered by the provider. This often means logging heavily within your functions, using API testing tools to trigger functions automatically with different input, and using any metrics the platform offers to try to gain insight into what may be happening in the execution environment.
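Since function logs are often the only window into the execution environment, emitting structured log lines makes them far more searchable than free-form text. A minimal sketch; the field names are an assumption, not any provider's required format:

```typescript
// Emit one JSON object per log line so platform log search can filter
// on fields like level, msg, or a request identifier.
function logEvent(fields: Record<string, unknown>): string {
  const line = JSON.stringify({ ts: new Date().toISOString(), ...fields });
  console.log(line); // most platforms capture stdout as the function log
  return line;
}

function handler(orderId: string): { ok: boolean } {
  logEvent({ level: "info", msg: "invocation start", orderId });
  try {
    // ... business logic would run here ...
    logEvent({ level: "info", msg: "invocation ok", orderId });
    return { ok: true };
  } catch (err) {
    logEvent({ level: "error", msg: "invocation failed", orderId, err: String(err) });
    throw err;
  }
}

handler("order-123");
```

Logging at the start and end of every invocation, with a correlation identifier like `orderId` carried through, makes it possible to reconstruct a multi-function pipeline from the platform's logs alone.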
Wrapping up
Serverless environments offer a lot of value in terms of developer productivity, reduced operational complexity, and real cost savings. However, it's important to remain aware of the limitations of the paradigm and some of the special challenges that you might have to address when designing applications to operate in serverless environments.
By gaining familiarity with the different obstacles that you might face, you can make a more informed decision about which applications might benefit most from the trade-offs involved. You will also be better prepared to approach these problems with an understanding of how to mitigate or avoid them through additional tooling or design considerations.
Prisma Accelerate provides a way to handle connection issues between serverless applications and backend databases. It can help manage ephemeral connections from your serverless functions to avoid exhausting your database's connection pool. Check it out now!