Over the past few decades, chip multi-processors (CMPs) – also called multi-cores – have been the dominant architectural choice for computing systems ranging from high-end servers to handheld devices. CMPs improve performance through parallelism by allowing multi-programmed/multi-threaded workloads to run concurrently on the available computing cores. However, these cores are not independent entities; rather, they share essential resources such as caches, memory bandwidth, and power budget. Unlike in a uni-processor environment, the energy consumption of an application running on a CMP depends not only on its own characteristics but also on its co-runners (applications running on other cores). In this work, we investigate an application's performance response to core/uncore frequency scaling and resource contention. We then model the energy-performance trade-off under these scenarios using machine learning.