From Scala to Java 1.8
I am writing a Spark program that parses a CSV log file: it splits each line on the separator ";" and creates an object whose attribute values are the words at specific positions. The code works in Scala, but I am having trouble translating it to Java 1.8 (I want to use lambda expressions in Java).
class="lang-scala prettyprint-override">val file = sc.textfile("hdfs:/../vrlogs.csv") class vrevent(val eventtimestamp: string, val deviceid: string, val eventtype: string, val itemgroupname: string) val vrevents = file.map(_.split(';')).filter(_.size == 32).map(a => new vrevent(a(0), a(1), a(6), a(13)))
I am not sure how to translate this part to Java: .map(a => new VREvent(a(0), a(1), a(6), a(13)))
I tried this (without the filter part):
class="lang-java prettyprint-override">javardd<string> records = lines.flatmap(s -> arrays.aslist(s.split(";"))).map(a -> new cdrevent(a[0], a[1], a[6], a[13]));
Assuming lines is a Stream<String>:
List<CDREvent> events = lines
        .map(s -> s.split(";"))
        .filter(a -> a.length == 32)
        .map(a -> new CDREvent(a[0], a[1], a[6], a[13]))
        .collect(Collectors.toList());
This maps each line to a String[], filters out arrays that are not of length 32, maps each String[] to a CDREvent, and collects them into a new List.
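Putting the answer together, here is a self-contained, runnable sketch. The CDREvent field names and the sample input lines are assumptions for illustration; the real file has 32 ";"-separated fields per record:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;

// Minimal stand-in for the CDREvent class from the question;
// the field names are assumptions, not the original definition.
class CDREvent {
    final String eventTimestamp;
    final String deviceId;
    final String eventType;
    final String itemGroupName;

    CDREvent(String eventTimestamp, String deviceId, String eventType, String itemGroupName) {
        this.eventTimestamp = eventTimestamp;
        this.deviceId = deviceId;
        this.eventType = eventType;
        this.itemGroupName = itemGroupName;
    }
}

public class CsvToEvents {
    public static void main(String[] args) {
        // One fabricated line with 32 fields (f0..f31) and one malformed line.
        String valid = String.join(";",
                IntStream.range(0, 32).mapToObj(i -> "f" + i).toArray(String[]::new));
        String invalid = "a;b;c";

        Stream<String> lines = Stream.of(valid, invalid);

        List<CDREvent> events = lines
                .map(s -> s.split(";"))                           // line -> String[]
                .filter(a -> a.length == 32)                      // keep complete records
                .map(a -> new CDREvent(a[0], a[1], a[6], a[13]))  // String[] -> CDREvent
                .collect(Collectors.toList());

        System.out.println(events.size());           // 1 (the malformed line is dropped)
        System.out.println(events.get(0).eventType); // f6
    }
}
```

The same map/filter/map chain carries over to Spark's JavaRDD, since its operators take the same style of lambda arguments.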
java scala lambda bigdata apache-spark