Semester of Graduation

Fall

Degree

Master of Science in Computer Science (MSCS)

Department

Computer Science

Document Type

Thesis

Abstract

Function as a Service (FaaS) is gaining popularity because it lets computations be deployed to serverless backends across different clouds. It shifts the complexity of provisioning and allocating the resources an application needs onto the cloud providers, who in turn give users the illusion of always-available resources. Among these providers, the AWS serverless platform offers a new paradigm for developing cloud applications without worrying about the underlying hardware infrastructure. It not only manages resource provisioning and scaling for an application but also provides an opportunity to reimagine cloud infrastructure as more secure, reliable, and cost-effective. Due to the lack of standardized benchmarks, serverless developers must rely on ad-hoc solutions to build cost-efficient and scalable applications. However, with the SeBS framework, we can test, evaluate, and perform performance analyses of different cloud providers. Various studies have compared the serverless platforms of different cloud providers, but no research so far has examined the AWS Lambda service on the ARM64 architecture or compared its two CPU architectures (x86 and ARM64).
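
As an illustration only (not part of the thesis itself), the sketch below shows one way such a comparison can be set up with boto3: the same handler package is deployed once per CPU architecture, and the only difference between the two deployments is the Architectures field. The function name, IAM role ARN, and package path are placeholders.

    import boto3

    lambda_client = boto3.client("lambda", region_name="us-east-1")

    # Deploy the same handler package once per CPU architecture under comparison.
    # The function name, IAM role ARN, and package path are hypothetical placeholders.
    for arch in ("x86_64", "arm64"):
        with open("function.zip", "rb") as package:
            lambda_client.create_function(
                FunctionName=f"sebs-benchmark-{arch}",
                Runtime="python3.9",
                Role="arn:aws:iam::123456789012:role/lambda-exec-role",
                Handler="handler.lambda_handler",
                Code={"ZipFile": package.read()},
                MemorySize=256,
                Timeout=60,
                Architectures=[arch],  # "arm64" runs on AWS Graviton processors
            )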

Thus, in this thesis, we have analyzed the perf-cost, latency, and cold-startup overhead for both the x86 and ARM64 architectures. We have conducted a meticulous perf-cost evaluation in different sections. Our results show that increasing code size and complexity directly affects the perf-cost metrics on both x86 and ARM64; however, at each invocation, whether a cold or a warm startup, ARM64 performs better than x86. Furthermore, our work shows the behavior of cold and warm startups on each architecture for any specific workload.
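
For readers who wish to reproduce the cold/warm distinction, a minimal client-side sketch is given below. It assumes a deployed function such as the hypothetical sebs-benchmark-arm64 from the previous sketch and relies on the fact that Lambda's REPORT log line contains an Init Duration field only on cold starts; it is an illustration, not the measurement harness used in the thesis.

    import base64
    import time

    import boto3

    lambda_client = boto3.client("lambda", region_name="us-east-1")

    def timed_invoke(function_name: str) -> dict:
        """Invoke once, returning client-side latency and whether it was a cold start."""
        start = time.perf_counter()
        response = lambda_client.invoke(
            FunctionName=function_name,
            LogType="Tail",  # return the last 4 KB of the execution log
            Payload=b"{}",
        )
        latency = time.perf_counter() - start
        log_tail = base64.b64decode(response["LogResult"]).decode()
        return {
            "latency_s": round(latency, 3),
            "cold_start": "Init Duration" in log_tail,  # printed only on cold starts
        }

    # The first invocation after deployment is typically cold, later ones warm.
    print(timed_invoke("sebs-benchmark-arm64"))  # hypothetical function name
    print(timed_invoke("sebs-benchmark-arm64"))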

Taking the viewpoint of a serverless user, we also conduct experiments showing the effect of complexity on memory usage on both the x86 and ARM64 architectures. We found that each architecture consumes nearly the same amount of memory for any particular workload, regardless of the invocation method (cold or warm). In addition, we observed that a cold invocation on the ARM64 architecture would be an efficient configuration for any specific workload in terms of memory usage. Our analysis also shows that input size directly impacts the perf-cost metrics. Regarding latency, ARM64 needs less time than x86 irrespective of the invocation method; looking closer, however, a warm startup's latency is lower than a cold one's. Therefore, the most efficient configuration for any specific workload would be a warm invocation on the ARM64 architecture.
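
As a sketch of where per-invocation memory figures can be read from (again an illustration, not the thesis's own tooling), the same REPORT log line also carries a Max Memory Used field, which the hypothetical helper below extracts from a tailed log:

    import re

    def max_memory_used_mb(log_tail: str):
        """Parse 'Max Memory Used: NNN MB' from a Lambda REPORT log line, if present."""
        match = re.search(r"Max Memory Used:\s*(\d+)\s*MB", log_tail)
        return int(match.group(1)) if match else None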

Similarly, in the case of cold-startup overhead, our results illustrate that for any specific workload, ARM64 has lower execution and provider-time overhead than x86. However, these overheads decrease as complexity increases, owing to the higher memory consumption of more complex workloads. Therefore, we can say that our work and results provide a fair and transparent baseline for the comparative evaluation of each AWS architecture. Overall, this thesis has provided us with a great learning opportunity in the assessment of serverless computing.

Date

8-8-2022

Committee Chair

Wang, Hao

DOI

10.31390/gradschool_theses.5648
