Sort
An example PySpark sort job.
Code sample
[[["Facile à comprendre","easyToUnderstand","thumb-up"],["J'ai pu résoudre mon problème","solvedMyProblem","thumb-up"],["Autre","otherUp","thumb-up"]],[["Difficile à comprendre","hardToUnderstand","thumb-down"],["Informations ou exemple de code incorrects","incorrectInformationOrSampleCode","thumb-down"],["Il n'y a pas l'information/les exemples dont j'ai besoin","missingTheInformationSamplesINeed","thumb-down"],["Problème de traduction","translationIssue","thumb-down"],["Autre","otherDown","thumb-down"]],[],[[["\u003cp\u003eThis webpage provides an example of a PySpark sort job using the Dataproc service.\u003c/p\u003e\n"],["\u003cp\u003eThe code sample demonstrates how to use PySpark to create an RDD, sort a list of strings, and then print the sorted list.\u003c/p\u003e\n"],["\u003cp\u003eAuthentication to Dataproc requires setting up Application Default Credentials, as detailed in the provided link.\u003c/p\u003e\n"],["\u003cp\u003eThe page also links to further documentation for the Dataproc Python API and the Dataproc quickstart using client libraries.\u003c/p\u003e\n"]]],[],null,["An example PySpark sort job.\n\nCode sample \n\nPython\n\n\nBefore trying this sample, follow the Python setup instructions in the\n[Dataproc quickstart using\nclient libraries](/dataproc/docs/quickstarts/quickstart-lib).\n\n\nFor more information, see the\n[Dataproc Python API\nreference documentation](/python/docs/reference/dataproc/latest).\n\n\nTo authenticate to Dataproc, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n import pyspark\n\n sc = pyspark.SparkContext()\n rdd = sc.parallelize([\"Hello,\", \"world!\", \"dog\", \"elephant\", \"panther\"])\n words = sorted(rdd.collect())\n print(words)\n\nWhat's next\n\n\nTo search and filter code samples for other Google Cloud products, see the\n[Google Cloud sample browser](/docs/samples?product=dataproc)."]]