With the conclusion of Run 2 in 2018, the LHC has now recorded a wealth of data well exceeding 100 fb$^{-1}$. In conjunction with the substantial output of other recent experiments at facilities like HERA and FNAL, these LHC data present an opportunity, as well as a challenge, for particle phenomenology. Incorporating new measurements into QCD global analyses of nucleon PDFs is difficult due to the significant computational cost of fitting, the problem of identifying the highest-impact data on which to concentrate efforts, and the theoretical requirements for suitably describing the data. While a number of numerical approaches, such as Bayesian reweighting, exist to ameliorate these issues, we have recently developed a novel, complementary analysis framework, $\tt{PDFSense}$, to rapidly assess the potential constraining power of candidate data sets being considered for inclusion in PDF fits. The advantage of this method is its ability to compare many measurements simultaneously on a common basis, using published error PDFs, allowing a quantitative visualization of the origin and interplay of the pulls from various empirical data at low computational cost. In this capacity, $\tt{PDFSense}$ played an important role in identifying the highest-impact data from LHC Run 1 and helped guide their implementation in the upcoming CT18 global analysis. In this talk, we use the $\tt{PDFSense}$ framework to provide an overview of the phenomenology and pulls of the LHC data, and examine the PDF constraints expected from the future HL-LHC, LHeC, and EIC programs.