spark-submit: All Parameters Explained
Usage: spark-submit [options] <app jar | python file | R file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]
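For example, a minimal submission of a Java/Scala application to a local master might look like the following sketch (the jar path, main class, and arguments are placeholders):

  spark-submit \
    --class org.example.MyApp \
    --master "local[*]" \
    /path/to/my-app.jar arg1 arg2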
Options:
--master MASTER_URL        spark://host:port, mesos://host:port, yarn,
                           k8s://host:port, or local (Default: local[*]).
--deploy-mode DEPLOY_MODE  Whether to launch the driver program locally ("client") or
                           on one of the worker machines inside the cluster ("cluster")
                           (Default: client).
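As an illustration, the same application submitted to a standalone cluster in cluster deploy mode, so that the driver itself runs on a worker (host and paths are placeholders):

  spark-submit \
    --master spark://master-host:7077 \
    --deploy-mode cluster \
    --class org.example.MyApp \
    /path/to/my-app.jar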
--class CLASS_NAME         Your application's main class (for Java / Scala apps).
--name NAME                A name of your application.
--jars JARS                Comma-separated list of jars to include on the driver
                           and executor classpaths.
--packages                 Comma-separated list of maven coordinates of jars to include
                           on the driver and executor classpaths. Will search the local
                           maven repo, then maven central and any additional remote
                           repositories given by --repositories. The format for the
                           coordinates should be groupId:artifactId:version.
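For instance, pulling in the Kafka connector by its Maven coordinate instead of shipping the jar yourself (the Scala and Spark versions in the coordinate are only an example and must match your cluster):

  spark-submit \
    --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.0 \
    --class org.example.MyApp \
    /path/to/my-app.jar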
--exclude-packages         Comma-separated list of groupId:artifactId, to exclude while
                           resolving the dependencies provided in --packages to avoid
                           dependency conflicts.
--repositories             Comma-separated list of additional remote repositories to
                           search for the maven coordinates given with --packages.
--py-files PY_FILES        Comma-separated list of .zip, .egg, or .py files to place
                           on the PYTHONPATH for Python apps.
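A sketch of submitting a PySpark application whose helper modules are bundled separately (the file names are placeholders):

  spark-submit \
    --master yarn \
    --py-files deps.zip,helpers.py \
    main.py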
--files FILES              Comma-separated list of files to be placed in the working
                           directory of each executor. File paths of these files
                           in executors can be accessed via SparkFiles.get(fileName).
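For example, shipping a config file alongside the job; inside the application it can then be opened by its bare name from the working directory, or located with SparkFiles.get("app.conf") (the paths are placeholders):

  spark-submit \
    --files /etc/myapp/app.conf \
    --class org.example.MyApp \
    /path/to/my-app.jar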
--conf, -c PROP=VALUE      Arbitrary Spark configuration property.
--properties-file FILE     Path to a file from which to load extra properties. If not
                           specified, this will look for conf/spark-defaults.conf.
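A sketch combining ad-hoc properties with a properties file (the values and file path are illustrative; spark.serializer and spark.sql.shuffle.partitions are standard Spark properties):

  spark-submit \
    --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
    --conf spark.sql.shuffle.partitions=200 \
    --properties-file /path/to/my-spark.conf \
    --class org.example.MyApp \
    /path/to/my-app.jar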
--driver-memory MEM        Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
--driver-java-options      Extra Java options to pass to the driver.
--driver-library-path      Extra library path entries to pass to the driver.
--driver-class-path        Extra class path entries to pass to the driver. Note that
                           jars added with --jars are automatically included in the
                           classpath.
--executor-memory MEM      Memory per executor (e.g. 1000M, 2G) (Default: 1G).
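For example, raising both memory settings above their defaults (the sizes are illustrative; tune them to your workload):

  spark-submit \
    --driver-memory 2G \
    --executor-memory 4G \
    --class org.example.MyApp \
    /path/to/my-app.jar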
--proxy-user NAME          User to impersonate when submitting the application.
                           This argument does not work with --principal / --keytab.
--help, -h                 Show this help message and exit.
--verbose, -v              Print additional debug output.
--version                  Print the version of current Spark.
Cluster deploy mode only:
--driver-cores NUM         Number of cores used by the driver, only in cluster mode
                           (Default: 1).
Spark standalone or Mesos with cluster deploy mode only:
--supervise                If given, restarts the driver on failure.
--kill SUBMISSION_ID       If given, kills the driver specified.
--status SUBMISSION_ID     If given, requests the status of the driver specified.
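A sketch of managing a driver previously submitted in standalone cluster mode; the master URL and the submission ID (which is printed when the driver is submitted) are placeholders:

  spark-submit --master spark://master-host:7077 --kill driver-20240101123456-0001
  spark-submit --master spark://master-host:7077 --status driver-20240101123456-0001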
Spark standalone and Mesos only:
--total-executor-cores NUM  Total cores for all executors.
Spark standalone and YARN only:
--executor-cores NUM       Number of cores per executor. (Default: 1 in YARN mode,
                           or all available cores on the worker in standalone mode.)
YARN-only:
--queue QUEUE_NAME         The YARN queue to submit to (Default: "default").
--num-executors NUM        Number of executors to launch (Default: 2).
                           If dynamic allocation is enabled, the initial number of
                           executors will be at least NUM.
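Putting the YARN options together, a sketch of a cluster-mode submission to a specific queue with a fixed executor count (all values are illustrative):

  spark-submit \
    --master yarn \
    --deploy-mode cluster \
    --queue default \
    --num-executors 4 \
    --executor-cores 2 \
    --executor-memory 4G \
    --class org.example.MyApp \
    /path/to/my-app.jar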
--archives ARCHIVES        Comma-separated list of archives to be extracted into the
                           working directory of each executor.
--principal PRINCIPAL      Principal to be used to login to KDC, while running on
                           secure HDFS.
--keytab KEYTAB            The full path to the file that contains the keytab for the
                           principal specified above. This keytab will be copied to
                           the node running the Application Master via the Secure
                           Distributed Cache, for renewing the login tickets and the
                           delegation tokens periodically.
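For a kerberized YARN cluster, a sketch of a long-running submission that lets YARN renew tickets from the keytab (the principal and keytab path are placeholders):

  spark-submit \
    --master yarn \
    --deploy-mode cluster \
    --principal user@EXAMPLE.COM \
    --keytab /path/to/user.keytab \
    --class org.example.MyApp \
    /path/to/my-app.jar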
