Advances in technology have led to a world where large-scale data collection is ubiquitous. Traditional techniques for processing data, however, were not designed for such large-scale data sets and are quickly becoming outdated. As a result, there is immense demand for efficient, scalable, and robust algorithms for solving linear systems of the form Xβ = y. Such problems arise in a wide range of applications, including medical imaging, finance, machine learning, and data analytics. Stochastic iterative algorithms have become an increasingly popular approach for dealing with large-scale data. The focus of this work is a particular suite of algorithms, sometimes referred to as row- or column-action methods. Two examples of such methods are the Randomized Kaczmarz algorithm, which uses rows of X, and the Randomized Gauss-Seidel algorithm, which uses columns of X. Each iteration of such a method performs a simple projection and requires access to only a single row or column of X. Because of this low memory footprint, these algorithms are well suited to problems with large-scale data. This work adapts, improves, and proposes new algorithmic designs and theoretical analyses for stochastic iterative methods in various settings, including factorized linear systems, systems with missing data, and systems with sparse, corrupted signals.
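To make the row- and column-action structure concrete, the following is a minimal sketch of the two methods named above, in their standard textbook forms (the function names and sampling scheme, rows or columns drawn with probability proportional to their squared norms, are illustrative choices, not specific to this work):

```python
import numpy as np

def randomized_kaczmarz(X, y, iters=5000, seed=0):
    """Randomized Kaczmarz: each iteration projects the current
    iterate onto the hyperplane defined by one randomly chosen row
    equation X[i] @ beta = y[i]."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    row_norms = np.sum(X**2, axis=1)
    probs = row_norms / row_norms.sum()  # sample rows ∝ squared norm
    beta = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        beta += (y[i] - X[i] @ beta) / row_norms[i] * X[i]
    return beta

def randomized_gauss_seidel(X, y, iters=5000, seed=0):
    """Randomized Gauss-Seidel: each iteration updates one randomly
    chosen coordinate of beta using a single column of X, minimizing
    the residual norm along that coordinate direction."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    col_norms = np.sum(X**2, axis=0)
    probs = col_norms / col_norms.sum()  # sample columns ∝ squared norm
    beta = np.zeros(n)
    r = y.copy()  # residual y - X @ beta, maintained incrementally
    for _ in range(iters):
        j = rng.choice(n, p=probs)
        step = X[:, j] @ r / col_norms[j]
        beta[j] += step
        r -= step * X[:, j]
    return beta
```

Note that each update touches only one row or column of X, which is the source of the low memory footprint discussed above: the full matrix never needs to reside in fast memory at once.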