I'm working with my first large database (53,098,492,383 records). When I
query it with something like
mydata <- sql("SELECT * FROM <table name>")
is "mydata" a SparkDataFrame, and do I work with SparkDataFrames the same
way I would a regular data.frame (so to speak)? Because I can't imagine I
would ever create a 53-billion-record data.frame. I'm starting to acquaint
myself with the SparkR package, but I get confused because it appears that
"data.frame" and "SparkDataFrame" are used interchangeably. Or maybe not.