The emerging cache-coherent Compute Express Link (CXL) interconnect provides a practical way to disaggregate cloud memory resources from monolithic servers into memory pools with DRAM-level access latency. While a DRAM-only memory pool improves resource utilization and reduces the Total Cost of Ownership (TCO) for cloud providers, we investigate the possibility of incorporating cheaper SSDs into a memory pooling system to further reduce the cost of cloud servers without sacrificing application performance. In this study, we build a simulated CXL-enabled DRAM-SSD hybrid memory pool based on Linux and commodity hardware, and evaluate it by running representative cloud workloads, covering deep learning training, databases, data analytics, and video processing, on the testbed. The evaluation results show that a hybrid memory pool can potentially reduce memory cost while maintaining the same level of application performance for computation-intensive applications. For example, with a memory overcommit ratio of 2, the performance degradation of training ResNet50 on the ImageNet dataset is only 2.68%.