Table of Contents

Pivotal HDB 2.3.0 Documentation
Pivotal HDB 2.3.0 Release Notes
Pivotal HDB 2.2.0 Release Notes
Pivotal HDB 2.1.2 Release Notes
Pivotal HDB 2.1.1 Release Notes
Pivotal HDB 2.1.0 Release Notes
Pivotal HDB 2.0.1 Release Notes
Pivotal HDB 2.0 Release Notes
Apache HAWQ System Requirements
Select HAWQ Host Machines
Setting up HDB Software Repositories
Install Apache HAWQ using Ambari
Install HAWQ from the Command Line (Optional)
PXF Post-Installation Procedures for Hive/HBase
Amazon EC2 Configuration
Installing Procedural Languages and Package Extensions for HAWQ
Installing MADlib
Installing PL/R
Upgrading to HDB 2.3.x
What is HAWQ?
HAWQ Architecture
Table Distribution and Storage
Elastic Query Execution Runtime
Resource Management
HDFS Catalog Cache
HAWQ Management Tools
High Availability, Redundancy and Fault Tolerance
Getting Started with HAWQ
Lesson 1 - Runtime Environment
Lesson 2 - Cluster Administration
Lesson 3 - Database Administration
Lesson 4 - Sample Data Set and HAWQ Schemas
Lesson 5 - HAWQ Tables
Lesson 6 - HAWQ Extension Framework (PXF)
Running a HAWQ Cluster
Introducing the HAWQ Operating Environment
Managing HAWQ Using Ambari
Using the Ambari REST API
Starting and Stopping HAWQ
Expanding a Cluster
Removing a Node
Backing Up and Restoring HAWQ
High Availability in HAWQ
Using Master Mirroring
HAWQ Filespaces and High Availability Enabled HDFS
Understanding the Fault Tolerance Service
Recommended Monitoring and Maintenance Tasks
Routine System Maintenance Tasks
Monitoring a HAWQ System
HAWQ Administrative Log Files
How HAWQ Manages Resources
Best Practices for Configuring Resource Management
© Copyright Pivotal Software Inc, 2013-2017 1 2.3.0
Configuring Resource Management
Integrating YARN with HAWQ
Working with Hierarchical Resource Queues
Analyzing Resource Manager Status
Configuring Client Authentication
Using LDAP Authentication with TLS/SSL
Using Kerberos Authentication
Configuring HAWQ/PXF for Secure HDFS
Configuring Kerberos User Authentication for HAWQ
Example - Setting up an MIT KDC Server
Disabling Kerberos Security
Overview of HAWQ Authorization
Using HAWQ Native Authorization
Overview of Ranger Policy Management
Configuring HAWQ to use Ranger Policy Management
Creating HAWQ Authorization Policies in Ranger
HAWQ Resources and Permissions
SQL Command Permissions Summary
Using MADLib with Ranger Authorization
Auditing Authorization Events
High Availability and HAWQ Ranger
HAWQ Ranger Kerberos Integration
Establishing a Database Session
HAWQ Client Applications
Connecting with psql
HAWQ Database Drivers and APIs
Troubleshooting Connection Problems
Defining Database Objects
Creating and Managing Databases
Creating and Managing Tablespaces
Creating and Managing Schemas
Creating and Managing Tables
Identifying HAWQ Table HDFS Files
Table Storage Model and Distribution Policy
Partitioning Large Tables
Creating and Managing Views
Using Languages and Extensions in HAWQ
Using HAWQ Built-In Languages
Using PL/Java
Using PL/Perl
Using PL/pgSQL in HAWQ
Using PL/Python in HAWQ
Using PL/R in HAWQ
Enabling Cryptographic Functions for PostgreSQL (pgcrypto)
Managing Data with HAWQ
Basic Data Operations
About Database Statistics
Concurrency Control
Working with Transactions
Loading and Unloading Data
Working with File-Based External Tables
Accessing File-Based External Tables
gpfdist Protocol
gpfdists Protocol
Handling Errors in External Table Data
Client-Based HAWQ Load Tools
Using the HAWQ File Server (gpfdist)
About gpfdist Setup and Performance
Controlling Segment Parallelism
Installing gpfdist
Starting and Stopping gpfdist
Troubleshooting gpfdist
Registering Files into HAWQ Internal Tables
Creating and Using Web External Tables
Command-based Web External Tables
URL-based Web External Tables
Loading Data Using an External Table
Loading and Writing Non-HDFS Custom Data
Using a Custom Format
Importing and Exporting Fixed Width Data
Examples - Read Fixed-Width Data
Creating External Tables - Examples
Handling Load Errors
Define an External Table with Single Row Error Isolation
Capture Row Formatting Errors and Declare a Reject Limit
Identifying Invalid CSV Files in Error Table Data
Moving Data between Tables
Loading Data with hawq load
Loading Data with COPY
Running COPY in Single Row Error Isolation Mode
Optimizing Data Load and Query Performance
Unloading Data from HAWQ
Defining a File-Based Writable External Table
Example - HAWQ file server (gpfdist)
Defining a Command-Based Writable External Web Table
Disabling EXECUTE for Web or Writable External Tables
Unloading Data Using a Writable External Table
Unloading Data Using COPY
Transforming XML Data
Determine the Transformation Schema
Write a Transform
Write the gpfdist Configuration
Load the Data
Transfer and Store the Data
Transforming with GPLOAD
Transforming with INSERT INTO SELECT FROM
Configuration File Format
XML Transformation Examples
Command-based Web External Tables
Example using IRS MeF XML Files (In demo Directory)
Example using WITSML™ Files (In demo Directory)
Formatting Data Files
Formatting Rows
Formatting Columns
Representing NULL Values
Escaping
Escaping in Text Formatted Files
Escaping in CSV Formatted Files
Character Encoding
HAWQ InputFormat for MapReduce
Using PXF with Unmanaged Data
Installing PXF Plug-ins
Configuring PXF
Accessing HDFS File Data
Accessing Hive Data
Accessing HBase Data
Accessing JSON File Data
Accessing External SQL Databases with JDBC (Beta)
Writing Data to HDFS
Using Profiles to Read and Write Data
PXF External Tables and API
Troubleshooting PXF
Querying Data
About HAWQ Query Processing
About GPORCA
Overview of GPORCA
GPORCA Features and Enhancements
Enabling GPORCA
Considerations when Using GPORCA
Determining The Query Optimizer In Use
Changed Behavior with GPORCA
GPORCA Limitations
Defining Queries
Using Functions and Operators
Query Performance
Query Profiling
Best Practices
Best Practices for Configuring HAWQ Parameters
Best Practices for Operating HAWQ
Best Practices for Securing HAWQ
Best Practices for Managing Resources
Best Practices for Managing Data
Best Practices for Querying Data
Troubleshooting
HAWQ Reference
SQL Commands
ABORT
ALTER AGGREGATE
ALTER DATABASE
ALTER CONVERSION
ALTER FUNCTION
ALTER OPERATOR
ALTER OPERATOR CLASS
ALTER RESOURCE QUEUE
ALTER ROLE
ALTER SEQUENCE
ALTER TABLE
ALTER TABLESPACE
ALTER TYPE
ALTER USER
ANALYZE
BEGIN
CHECKPOINT
CLOSE
COMMIT
COPY
CREATE AGGREGATE
CREATE CAST
CREATE CONVERSION
CREATE DATABASE
CREATE EXTERNAL TABLE
CREATE FUNCTION
CREATE GROUP
CREATE LANGUAGE
CREATE OPERATOR
CREATE OPERATOR CLASS
CREATE RESOURCE QUEUE
CREATE ROLE
CREATE SCHEMA
CREATE SEQUENCE
CREATE TABLE
CREATE TABLE AS
CREATE TABLESPACE
CREATE TYPE
CREATE USER
CREATE VIEW
DEALLOCATE
DECLARE
DROP AGGREGATE
DROP CAST
DROP CONVERSION
DROP DATABASE
DROP EXTERNAL TABLE
DROP FILESPACE
DROP FUNCTION
DROP GROUP
DROP LANGUAGE
DROP OPERATOR
DROP OPERATOR CLASS
DROP OWNED
DROP RESOURCE QUEUE
DROP ROLE
DROP SCHEMA
DROP SEQUENCE
DROP TABLE
DROP TABLESPACE
DROP TYPE
DROP USER
DROP VIEW
END
EXECUTE
EXPLAIN
FETCH
GRANT
INSERT
PREPARE
REASSIGN OWNED
RELEASE SAVEPOINT
RESET
REVOKE
ROLLBACK
ROLLBACK TO SAVEPOINT
SAVEPOINT
SELECT
SELECT INTO
SET
SET ROLE
SET SESSION AUTHORIZATION
SHOW
TRUNCATE
VACUUM
Server Configuration Parameter Reference
About Server Configuration Parameters
Configuration Parameter Categories
Configuration Parameters
Sample hawq-site.xml Configuration File
HDFS Configuration Reference
Environment Variables
Character Set Support Reference
Data Types
Table 1. HAWQ Built-in Data Types
System Catalog Reference
System Tables
System Views
System Catalogs Definitions
gp_configuration_history
gp_distribution_policy
gp_global_sequence
gp_master_mirroring
gp_persistent_database_node
gp_persistent_filespace_node
gp_persistent_relation_node
gp_persistent_relfile_node
gp_persistent_tablespace_node
gp_relfile_node
gp_segment_configuration
gp_version_at_initdb
pg_aggregate
pg_am
pg_amop
pg_amproc
pg_appendonly
pg_attrdef
pg_attribute
pg_attribute_encoding
pg_auth_members
pg_authid
pg_cast
pg_class
pg_compression
pg_constraint
pg_conversion
pg_database
pg_depend
pg_description
pg_exttable
pg_filespace
pg_filespace_entry
pg_index
pg_inherits
pg_language
pg_largeobject
pg_listener
pg_locks
pg_namespace
pg_opclass
pg_operator
pg_partition
pg_partition_columns
pg_partition_encoding
pg_partition_rule
pg_partition_templates
pg_partitions
pg_pltemplate
pg_proc
pg_resqueue
pg_resqueue_status
pg_rewrite
pg_roles
pg_shdepend
pg_shdescription
pg_stat_activity
pg_stat_last_operation
pg_stat_last_shoperation
pg_stat_operations
pg_stat_partition_operations
pg_statistic
pg_stats
pg_tablespace
pg_trigger
pg_type
pg_type_encoding
pg_window
The hawq_toolkit Administrative Schema
HAWQ Management Tools Reference
analyzedb
createdb
createuser
dropdb
dropuser
gpfdist
gplogfilter
hawq activate
hawq check
hawq checkperf
hawq config
hawq extract
hawq filespace
hawq init
hawq load
hawq register
hawq restart
hawq scp
hawq ssh
hawq ssh-exkeys
hawq start
hawq state
hawq stop
pg_dump
pg_dumpall
pg_restore
psql
vacuumdb
Pivotal HDB 2.3.0 Documentation

Published: July 25, 2018

This documentation describes how to install, configure, and use the new features of Pivotal HDB 2.3, which incorporates features from Apache HAWQ® (incubating).

Key topics in the Pivotal HDB 2.3.0 documentation include:

Release Notes and System Requirements
Installing and upgrading Pivotal HDB
System Overview
Getting Started Tutorial
Managing Data with HAWQ
Using PXF with Unmanaged Data
Using Procedural Languages
Best Practices
Troubleshooting
Reference

In addition, specialized subtopics include:

Managing HAWQ Using Ambari
About GPORCA
Integrating YARN with HAWQ
PXF Java API Reference
Pivotal HDB 2.3.0 Release Notes

Pivotal HDB 2.3.0 is a minor release of the product and is based on Apache HAWQ® (Incubating). This release includes HAWQ Ranger High Availability and Kerberos support, Beta support for the PXF Hive Vectorized ORC and JDBC profiles, and bug fixes.

Supported Platforms

The supported platform for running Pivotal HDB 2.3.0 comprises:

Red Hat Enterprise Linux (RHEL) 6.4+, 7.2+ (64-bit) (See the note in Known Issues and Limitations for kernel limitations.)
Hortonworks Data Platform (HDP) 2.5.3.
Ambari 2.4.2 (for Ambari-based installation and HAWQ cluster management).

Each Pivotal HDB host machine must also meet the Apache HAWQ (Incubating) system requirements. See Apache HAWQ System Requirements for more information.
Product Support Matrix

The following table summarizes Pivotal HDB product support for current and previous versions of Pivotal HDB, Hadoop, HAWQ, Ambari, and operating systems.
| Pivotal HDB Version | PXF Version | HDP Version Requirement | Ambari Version Requirement | HAWQ Ambari Plug-in Requirement | MADlib Version Requirement | RHEL/CentOS Version Requirement | SuSE Version Requirement |
|---|---|---|---|---|---|---|---|
| 2.3.0.0 | 3.3.0.0 | 2.5.3, 2.6.1 | 2.4.2, 2.5.1 | 2.3.0.0 | 1.10, 1.11 | 6.4+, 7.2+ (64-bit) | n/a |
| 2.2.0.0 | 3.2.1.0 | 2.5.3, 2.6.1 | 2.4.2, 2.5.1 | 2.2.0.0 | 1.9, 1.9.1, 1.10 | 6.4+, 7.2+ (64-bit) | n/a |
| 2.1.2.0 | 3.2.0.0 | 2.5 | 2.4.1, 2.4.2 | 2.1.2.0 | 1.9, 1.9.1, 1.10 | 6.4+ (64-bit) | n/a |
| 2.1.1.0 | 3.1.1.0 | 2.5 | 2.4.1 | 2.1.1.0 | 1.9, 1.9.1 | 6.4+ (64-bit) | n/a |
| 2.1.0.0 | 3.1.0.0 | 2.5 | 2.4.1 | 2.1.0.0 | 1.9, 1.9.1 | 6.4+ (64-bit) | n/a |
| 2.0.1.0 | 3.0.1 | 2.4.0, 2.4.2 | 2.2.2, 2.4 | 2.0.1 | 1.9, 1.9.1 | 6.4+ (64-bit) | n/a |
| 2.0.0.0 | 3.0.0 | 2.3.4, 2.4.0 | 2.2.2 | 2.0.0 | 1.9, 1.9.1 | 6.4+ (64-bit) | n/a |
| 1.3.1.1 | 2.5.1.1 | 2.2.6 | 2.0.x | 1.3.1 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | SLES 11 SP3 |
| 1.3.1.0 | 2.5.1.1 | 2.2.6 | 2.0.x | 1.3.1 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | SLES 11 SP3 |
| 1.3.0.3 | 1.3.3 | 2.2.4.2 | 1.7 | 1.2 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | SLES 11 SP3 |
| 1.3.0.2 | 1.3.3 | 2.2.4.2 | 1.7 | 1.2 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | SLES 11 SP3 |
| 1.3.0.1 | 1.3.3 | 2.2.4.2 | 1.7 | 1.1 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | n/a |
| 1.3.0.0 | 1.3.3 | n/a | n/a | n/a | 1.7.1, 1.8, 1.9, 1.9.1 | n/a | n/a |
Procedural Language Support Matrix

The following table summarizes component version support for Procedural Languages available in Pivotal HDB 2.x. The versions listed have been tested with Pivotal HDB. Higher versions may be compatible; test higher versions thoroughly in your non-production environments before deploying to production.
| Pivotal HDB Version | PL/Java Java Version Requirement | PL/R R Version Requirement | PL/Perl Perl Version Requirement | PL/Python Python Version Requirement |
|---|---|---|---|---|
| 2.3.0.0 | 1.7 | 3.3.1 | 5.10.1 | 2.6.2 |
| 2.2.0.0 | 1.7 | 3.3.1 | 5.10.1 | 2.6.2 |
| 2.1.x.0 | 1.7 | 3.3.1 | 5.10.1 | 2.6.2 |
| 2.0.1.0 | 1.7 | 3.3.1 | 5.10.1 | 2.6.2 |
| 2.0.0.0 | 1.6, 1.7 | 3.1.0 | 5.10.1 | 2.6.2 |
AWS Support Requirements

Pivotal HDB is supported on Amazon Web Services (AWS) servers using either Amazon block-level instance store (Amazon uses the volume names ephemeral[0-23]) or Amazon Elastic Block Store (Amazon EBS) storage. Use long-running EC2 instances with these storage types for long-running HAWQ instances, because Spot instances can be interrupted. If you use Spot instances, minimize the risk of data loss by loading from and exporting to external storage. See Amazon EC2 Configuration for additional HAWQ AWS-related deployment considerations.
Pivotal HDB 2.3.0 Features and Changes

Pivotal HDB 2.3.0 is based on Apache HAWQ (Incubating), and includes the following new features as compared to Pivotal HDB 2.2.0:

HAWQ Ranger High Availability

HDB 2.3 allows you to configure a standby HAWQ Ranger Plugin Service (RPS) for failover in cases where the primary RPS is unavailable or not responding on the local HAWQ master node. Configuring a standby RPS ensures that HAWQ can authorize incoming requests at all times without any downtime. Refer to the High Availability and HAWQ Ranger documentation for additional information.

HAWQ Ranger Kerberos Support

HDB 2.3 supports HAWQ Ranger integration when Kerberos is enabled for Ranger and/or Kerberos is enabled for HAWQ user authentication. See the HAWQ Ranger Kerberos Integration documentation for the information necessary to configure Kerberos support for your Ranger-authorized HAWQ cluster.

PXF Hive Vectorized ORC profile

HDB 2.3 includes a Beta release of Optimized Row Columnar (ORC) file format support with a vectorized batch reader. Refer to the PXF Hive plug-in ORC File Format documentation for specific information related to this new feature.

PXF JDBC plug-in

HDB 2.3 includes a Beta release of the PXF JDBC plug-in. With this plug-in, you can create and query a HAWQ external table representing an external SQL data store using a JDBC connection. Refer to the Accessing External SQL Databases with JDBC (Beta) documentation for specific information related to the PXF JDBC plug-in.
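For illustration only, a JDBC-backed external table follows the general PXF form shown below. The table name, host, port, database, and driver values here are hypothetical, and the exact option names should be confirmed against the Accessing External SQL Databases with JDBC (Beta) documentation:

```sql
-- Illustrative sketch: all names, hosts, and JDBC options are placeholders.
CREATE EXTERNAL TABLE sales_from_mysql (id int, amount float8)
  LOCATION ('pxf://pxf-host:51200/sales?PROFILE=Jdbc&JDBC_DRIVER=com.mysql.jdbc.Driver&DB_URL=jdbc:mysql://db-host:3306/demodb')
  FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');

-- Query the external SQL data store through HAWQ:
SELECT id, amount FROM sales_from_mysql;
```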
Getting Started with HAWQ Tutorial

HDB 2.3 includes the Getting Started with HAWQ tutorial. This guide provides a quick introduction and various exercises to get you up and running with your HAWQ installation. You will learn about the HAWQ runtime environment, including cluster and database administration basics. You will also use HAWQ and PXF to access sample managed and unmanaged data.

Pivotal HDB 2.3.0 Upgrade

The Upgrading from Pivotal HDB 2.1.x or 2.2 guide provides specific details on applying the Pivotal HDB 2.3.0 minor release to your HDB 2.1.x or 2.2.0 installation.

Note: Direct upgrades from HDB 2.0.x are not supported. There is also no direct upgrade path from Pivotal HDB 1.x to HDB 2.3.0. For more information on considerations for upgrading from Pivotal 1.x releases to Pivotal 2.x releases, refer to the Pivotal HDB 2.0 documentation. Contact your Pivotal representative for assistance in migrating from HDB 1.x to HDB 2.x.

Differences Compared to Apache HAWQ (Incubating)

Pivotal HDB 2.3.0 includes all of the functionality in Apache HAWQ (Incubating).

Resolved Issues

The following HAWQ and PXF issues were resolved in Pivotal HDB 2.3.0.
| Apache Jira | Component | Summary |
|---|---|---|
| HAWQ-1404 | PXF | PXF now leverages the file-level statistics of an ORC file, and emits records for COUNT(*) |
| HAWQ-1409 | PXF | HAWQ sends an additional header to PXF to indicate the aggregate function type |
| HAWQ-1417 | Core | Fixed a crash that could occur when executing ANALYZE after COPY |
| HAWQ-1422 | Security | HAWQ now provides user-group membership to the Ranger Plug-in Service when requesting access |
| HAWQ-1425 | Command Line Tools | Fixed an incorrect init cluster error message that occurred when an ssh connection failed |
| HAWQ-1426 | Command Line Tools | Fixed a problem where hawq extract returned an error after the table was reorganized |
| HAWQ-1427 | PXF | Resolved a PXF JSON profile lang3 dependency error |
| HAWQ-1429 | PXF | PXF no longer uses AggBridge when a WHERE clause is specified |
| HAWQ-1430 | Security | Updated the Ranger-related log level to avoid log flooding |
| HAWQ-1431 | PXF | PXF no longer uses StatsAccessor when a column is specified in a SELECT clause |
| HAWQ-1433 | Resource Manager | ALTER RESOURCE QUEUE DDL now checks the format of the attributes MEMORY_CLUSTER_LIMIT and CORE_CLUSTER_LIMIT |
| HAWQ-1434 | PXF | Removed forced uppercasing of the table name in the PXF JDBC plug-in |
| HAWQ-1436 | Security | Implemented Ranger Plug-in Service High Availability on HAWQ |
| HAWQ-1438 | Core | HAWQ now supports a resource owner beyond the transaction boundary |
| HAWQ-1439 | Resource Manager | HAWQ now tolerates the system time being changed to an earlier point when checking the resource context timeout |
| HAWQ-1440 | PXF | ANALYZE is supported for Hive external tables |
| HAWQ-1443 | Security | Implemented Ranger lookup for HAWQ when Kerberos is enabled |
| HAWQ-1446 | PXF | Introduced the Vectorized profile for ORC |
| HAWQ-1449 | Command Line Tools | HAWQ start/stop cluster now starts/stops the Ranger Plugin Service on the standby node |
| HAWQ-1451 | Command Line Tools | The hawq state command reports the status of both the Ranger Plugin Service and the standby Ranger Plugin Service |
| HAWQ-1452 | Security | Removed hawq_rps_address_suffix and hawq_rps_address_host in hawq-site.xml to simplify configuration for Ranger Plug-in Service High Availability |
| HAWQ-1453 | Core | Fixed a problem that caused the error: relation_close() report error at analyzeStmt(): is not owned by resource owner TopTransaction (resowner.c:814) |
| HAWQ-1455 | Core | Fixed incorrect results on a CTAS query over the catalog |
| HAWQ-1456 | Security | HAWQ copies Ranger Plugin Service configuration files to the standby in specific scenarios |
| HAWQ-1457 | Core | Shared memory for Segment Status and Metadata Cache is no longer allocated on segments |
| HAWQ-1460 | Core | The WAL Send Server process now exits if the postmaster on the master is killed |
| HAWQ-1461 | PXF | Improved partition parameter validation for the PXF JDBC plug-in |
| HAWQ-1476 | Security | Improved enable-ranger-plugin.sh to support Kerberos |
| HAWQ-1477 | Security | The Ranger Plugin Service connects to the Ranger admin under Kerberos security |
| HAWQ-1480 | Command Line Tools | Packing a core file in HAWQ is now possible |
| HAWQ-1485 | Security | The user/password is used instead of the credentials cache in Ranger lookup for HAWQ when Kerberos is enabled |
| HAWQ-1486 | PXF | Resolved a PANIC that occurred while accessing a PXF HDFS table |
| HAWQ-1487 | Core | Fixed a deadlock that occurred when trying to process an interrupt in error handling |
| HAWQ-1492 | PXF | Packaged the jdbc-plugin RPM with the PXF installation |
| HAWQ-1493 | Security | Integrated the Ranger lookup JAAS configuration in the ranger-admin plugin JAR |
Known Issues and Limitations

Operating System

HDB installations running RHEL 7 or CentOS 7 versions prior to 7.3 may experience an operating system issue that can cause HDB to hang with large workloads. RHEL 7.3 and CentOS 7.3 resolve this issue.

Ranger Integration with HAWQ

Refer to Limitations of Ranger Policy Management for a discussion of limitations related to HAWQ integration with Ranger authorization.

PXF

GPSQL-3345 - To take advantage of the change in the number of virtual segments, PXF external tables must be dropped and recreated after updating the default_hash_table_bucket_number server configuration parameter.

GPSQL-3347 - The LOCATION string provided when creating a PXF external table must use only ASCII characters to identify a file path. Specifying double-byte or multi-byte characters in a file path returns the following error (formatted for clarity):
ERROR: remote component error (500) from 'IP_Address:51200': type Exception report message: File does not exist: /tmp/??????/ABC-??????-001.csv description: The server encountered an internal error that prevented it from fulfilling this request. exception: java.io.IOException: File does not exist: /tmp/??????/ABC-??????-001.csv (libchurl.c:897) (seg10 hdw2.hdp.local:40000 pid=389911) (dispatcher.c:1801)
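Because the PXF service rejects non-ASCII file paths at query time with the error shown above, a client-side pre-check can catch the problem before you create the external table. The helper below is a minimal illustrative sketch of our own; it is not part of HAWQ or PXF:

```python
def is_ascii_path(path):
    """Return True if every character in the path is plain ASCII.

    PXF LOCATION file paths must be ASCII-only; double-byte or
    multi-byte characters trigger a 'File does not exist' error
    from the PXF service.
    """
    try:
        path.encode("ascii")
        return True
    except UnicodeEncodeError:
        return False

# ASCII-only paths are safe to use in a PXF LOCATION string:
print(is_ascii_path("/tmp/sales/ABC-001.csv"))
# Paths containing multi-byte characters should be renamed first:
print(is_ascii_path("/tmp/\u9500\u552e/ABC-001.csv"))
```

A check like this is cheap to run over a directory listing before bulk-registering files as external tables.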
PXF in a Kerberos-secured cluster requires YARN to be installed due to a dependency on YARN libraries.

In order for PXF to interoperate with HBase, you must manually add the PXF HBase JAR file to the HBase classpath after installation. See Post-Install Procedure for Hive and HBase on HDP.

HAWQ-974 - When using certain PXF profiles to query larger files stored in HDFS, users may occasionally experience hanging or query timeouts. This is a known issue that will be improved in a future HDB release. Refer to Addressing PXF Memory Issues for a discussion of the configuration options available to address these issues in your PXF deployment.

The HiveORC profile supports aggregate queries (count, min, max, etc.), but they have not yet been optimized to leverage ORC file- and stripe-level metadata.

The HiveVectorizedORC profile does not support the timestamp data type or complex types.
Ambari

Ambari-managed clusters should only use Ambari for setting server configuration parameters. Parameters modified using the hawq config command will be overwritten on Ambari startup or reconfiguration.

When installing HAWQ in a Kerberos-secured cluster, the installation process may report a warning/failure in Ambari if the HAWQ configuration for resource management type is switched to YARN mode during installation. The warning is related to HAWQ not being able to register with YARN until the HDFS and YARN services are restarted with the new configurations resulting from the HAWQ installation process.
The HAWQ standby master will not work after you change the HAWQ master port number. To enable the standby master, you must first remove and then re-initialize it. See Removing the HAWQ Standby Master and Activating the HAWQ Standby Master.

The Ambari Re-Synchronize HAWQ Standby Master service action fails if there is an active connection to the HAWQ master node. The HAWQ task output shows the error, Active connections. Aborting shutdown... If this occurs, close all active connections and then try the re-synchronize action again.

The Ambari Run Service Check action for HAWQ and PXF may not work properly on a secure cluster if PXF is not co-located with the YARN component.

In a secured cluster, if you move the YARN Resource Manager to another host, you must manually update hadoop.proxyuser.yarn.hosts in the HDFS core-site.xml file to match the new Resource Manager hostname. If you do not perform this step, HAWQ segments fail to get resources from the Resource Manager.
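The updated core-site.xml property takes the standard Hadoop form shown below; the hostname value is a placeholder for your new Resource Manager host:

```xml
<!-- core-site.xml: replace the value with your new ResourceManager hostname -->
<property>
  <name>hadoop.proxyuser.yarn.hosts</name>
  <value>new-resourcemanager-host.example.com</value>
</property>
```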
The Ambari Stop HAWQ Server (Immediate Mode) service action or hawq stop -M immediate command may not stop all HAWQ master processes in some cases. Several postgres processes owned by the gpadmin user may remain active.

Ambari checks whether the hawq_rm_yarn_address and hawq_rm_yarn_scheduler_address values are valid when YARN HA is not enabled. In clusters that use YARN HA, these properties are not used and may get out of sync with the active Resource Manager. This can lead to false warnings from Ambari if you try to change the property values.
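For reference, these two properties live in hawq-site.xml and point at the Resource Manager's RPC and scheduler endpoints. The hostname and ports below are illustrative only (8032 and 8030 are common YARN defaults, not values mandated by HAWQ); in YARN HA clusters the properties are ignored and may drift from the active Resource Manager:

```xml
<!-- hawq-site.xml: illustrative values; not consulted when YARN HA is enabled -->
<property>
  <name>hawq_rm_yarn_address</name>
  <value>resourcemanager-host.example.com:8032</value>
</property>
<property>
  <name>hawq_rm_yarn_scheduler_address</name>
  <value>resourcemanager-host.example.com:8030</value>
</property>
```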
Ambari does not support Custom Configuration Groups with HAWQ.

Certain HAWQ server configuration parameters related to resource enforcement are not active. Modifying these parameters has no effect in HAWQ because the resource enforcement feature is not currently supported. These parameters include hawq_re_cgroup_hierarchy_name, hawq_re_cgroup_mount_point, and hawq_re_cpu_enable. These parameters appear in the Advanced hawq-site configuration section of the Ambari management interface.
Workaround Required after Moving Namenode

If you use the Ambari Move Namenode Wizard to move a Hadoop namenode, the Wizard does not automatically update the HAWQ configuration to reflect the change. This leaves HAWQ in a non-functional state, and causes HAWQ service checks to fail with an error similar to:
2017-04-19 21:22:59,138 - SQL command executed failed: export PGPORT=5432 && source /usr/local/hawq/greenplum_path.sh && psql -d template1 -c \"CREATE TABLE ambari_hawq_test (col1 int) DISTRIBUTED RANDOMLY;\"
Returncode: 1
Stdout:
Stderr: Warning: Permanently added 'ip-10-32-36-168.ore1.vpc.pivotal.io,10.32.36.168' (RSA) to the list of known hosts.
WARNING: could not remove relation directory 16385/1/18366: Input/output error
CONTEXT: Dropping file-system object -- Relation Directory: '16385/1/18366'
ERROR: could not create relation directory hdfs://ip-10-32-36-168.ore1.vpc.pivotal.io:8020/hawq_default/16385/1/18366: Input/output error
2016-04-19 21:22:59,139 - SERVICE CHECK FAILED: HAWQ was not able to write and query from a table
2016-04-19 21:23:02,608 - ** FAILURE **: Service check failed 1 of 3 checks
stdout: /var/lib/ambari-agent/data/output-281.txt
To work around this problem, perform one of the following procedures after you complete the Move Namenode Wizard.

Workaround for Non-HA NameNode Clusters:

1. Perform an HDFS service check to ensure that HDFS is running properly after you moved the NameNode.

2. Use the Ambari configs.sh utility to update hawq_dfs_url to the new NameNode address. See Modify configurations on the Ambari Wiki for more information. For example:

$ cd /var/lib/ambari-server/resources/scripts/
$ ./configs.sh set {ambari_server_host} {clustername} hawq-site hawq_dfs_url {new_namenode_address}:{port}/hawq_default

3. Restart HAWQ to apply the configuration change.

4. Use ssh to log in to a HAWQ node and run the checkpoint command:
$ psql -d template1 -c "checkpoint"
5. Stop the HAWQ service.

6. The master data directory is identified in the hawq_master_directory property value of the $GPHOME/etc/hawq-site.xml file. Copy the master data directory to a backup location:

$ export MDATA_DIR=/value/from/hawqsite
$ cp -r $MDATA_DIR /catalog/backup/location

7. Execute this query to display all available HAWQ filespaces:

SELECT fsname, fsedbid, fselocation
FROM pg_filespace AS sp, pg_filespace_entry AS entry, pg_filesystem AS fs
WHERE sp.fsfsys = fs.oid
  AND fs.fsysname = 'hdfs'
  AND sp.oid = entry.fsefsoid
ORDER BY entry.fsedbid;

 fsname       | fsedbid | fselocation
--------------+---------+------------------------------------------------
 cdbfast_fs_a | 0       | hdfs://hdfs-cluster/hawq//cdbfast_fs_a
 dfs_system   | 0       | hdfs://test5:9000/hawq/hawq-1459499690
(2 rows)

8. Execute the hawq filespace command for each filespace that was returned by the previous query. For example:

$ hawq filespace --movefilespace dfs_system --location=hdfs://new_namenode:port/hawq/hawq-1459499690
$ hawq filespace --movefilespace cdbfast_fs_a --location=hdfs://new_namenode:port/hawq//cdbfast_fs_a

9. If your cluster uses a HAWQ standby master, reinitialize the standby master in Ambari using the Remove Standby Wizard followed by the Add Standby Wizard.

10. Start the HAWQ service.

11. Run a HAWQ service check to ensure that all tests pass.
Workaround for HA NameNode Clusters:

1. Perform an HDFS service check to ensure that HDFS is running properly after you moved the NameNode.

2. Use Ambari to expand Custom hdfs-client in the HAWQ Configs tab, then update the dfs.namenode. properties to match the current NameNode configuration.

3. Restart HAWQ to apply the configuration change.

4. Run a HAWQ service check to ensure that all tests pass.
Pivotal HDB 2.2.0 Release Notes

Pivotal HDB 2.2.0 is a minor release of the product and is based on Apache HAWQ (Incubating). This release includes RHEL/CentOS 7 support, Beta support for Apache Ranger integration, and bug fixes.

Supported Platforms

The supported platform for running Pivotal HDB 2.2.0 comprises:

Red Hat Enterprise Linux (RHEL) 6.4+, 7.2+ (64-bit) (See the note in Known Issues and Limitations for kernel limitations.)
CentOS 7
Hortonworks Data Platform (HDP) 2.5.3.
Ambari 2.4.2 (for Ambari-based installation and HAWQ cluster management).

Each Pivotal HDB host machine must also meet the Apache HAWQ (Incubating) system requirements. See Apache HAWQ System Requirements for more information.

Product Support Matrix

The following table summarizes Pivotal HDB product support for current and previous versions of Pivotal HDB, Hadoop, HAWQ, Ambari, and operating systems.
| Pivotal HDB Version | PXF Version | HDP Version Requirement | Ambari Version Requirement | HAWQ Ambari Plug-in Requirement | MADlib Version Requirement | RHEL/CentOS Version Requirement | SuSE Version Requirement |
|---|---|---|---|---|---|---|---|
| 2.2.0.0 | 3.2.1.0 | 2.5.3, 2.6.1 | 2.4.2, 2.5.1 | 2.2.0.0 | 1.9, 1.9.1, 1.10 | 6.4+, 7.2+ (64-bit) | n/a |
| 2.1.2.0 | 3.2.0.0 | 2.5 | 2.4.1, 2.4.2 | 2.1.2.0 | 1.9, 1.9.1, 1.10 | 6.4+ (64-bit) | n/a |
| 2.1.1.0 | 3.1.1.0 | 2.5 | 2.4.1 | 2.1.1.0 | 1.9, 1.9.1 | 6.4+ (64-bit) | n/a |
| 2.1.0.0 | 3.1.0.0 | 2.5 | 2.4.1 | 2.1.0.0 | 1.9, 1.9.1 | 6.4+ (64-bit) | n/a |
| 2.0.1.0 | 3.0.1 | 2.4.0, 2.4.2 | 2.2.2, 2.4 | 2.0.1 | 1.9, 1.9.1 | 6.4+ (64-bit) | n/a |
| 2.0.0.0 | 3.0.0 | 2.3.4, 2.4.0 | 2.2.2 | 2.0.0 | 1.9, 1.9.1 | 6.4+ (64-bit) | n/a |
| 1.3.1.1 | 2.5.1.1 | 2.2.6 | 2.0.x | 1.3.1 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | SLES 11 SP3 |
| 1.3.1.0 | 2.5.1.1 | 2.2.6 | 2.0.x | 1.3.1 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | SLES 11 SP3 |
| 1.3.0.3 | 1.3.3 | 2.2.4.2 | 1.7 | 1.2 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | SLES 11 SP3 |
| 1.3.0.2 | 1.3.3 | 2.2.4.2 | 1.7 | 1.2 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | SLES 11 SP3 |
| 1.3.0.1 | 1.3.3 | 2.2.4.2 | 1.7 | 1.1 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | n/a |
| 1.3.0.0 | 1.3.3 | n/a | n/a | n/a | 1.7.1, 1.8, 1.9, 1.9.1 | n/a | n/a |
Procedural Language Support Matrix

The following table summarizes component version support for Procedural Languages available in Pivotal HDB 2.x. The versions listed have been tested with Pivotal HDB. Higher versions may be compatible; test higher versions thoroughly in your non-production environments before deploying to production.
| Pivotal HDB Version | PL/Java Java Version Requirement | PL/R R Version Requirement | PL/Perl Perl Version Requirement | PL/Python Python Version Requirement |
|---|---|---|---|---|
| 2.2.0.0 | 1.7 | 3.3.1 | 5.10.1 | 2.6.2 |
| 2.1.x.0 | 1.7 | 3.3.1 | 5.10.1 | 2.6.2 |
| 2.0.1.0 | 1.7 | 3.3.1 | 5.10.1 | 2.6.2 |
| 2.0.0.0 | 1.6, 1.7 | 3.1.0 | 5.10.1 | 2.6.2 |
AWS Support Requirements
Pivotal HDB is supported on Amazon Web Services (AWS) servers using either Amazon block-level instance store (Amazon uses the volume names ephemeral[0-23]) or Amazon Elastic Block Store (Amazon EBS) storage. Use long-running EC2 instances for long-running HAWQ instances, as Spot Instances can be interrupted. If you use Spot Instances, minimize the risk of data loss by loading from and exporting to external storage.
Pivotal HDB 2.2.0 Features and Changes
Pivotal HDB 2.2.0 is based on Apache HAWQ (Incubating), and includes the following new features as compared to Pivotal HDB 2.1.2:

Ranger Integration (Beta) - Pivotal HDB 2.2.0 introduces Beta support for Apache Ranger. The HAWQ Ranger Plugin Service is a RESTful service that provides integration between HDB and Ranger policy management. Ranger integration enables you to use Apache Ranger to authorize user access to HAWQ resources. Using Ranger enables you to manage all of your Hadoop components' authorization policies using the same user interface, policy store, and auditing stores. Refer to Overview of Ranger Policy Management for specific information on HAWQ integration with Ranger.

RHEL/CentOS 7.2+ Support - Pivotal HDB 2.2.0 now provides product downloads for Red Hat Enterprise Linux 7 and CentOS 7 operating systems.

PXF ORC with Pivotal HDB - Pivotal HDB 2.2.0 now fully supports PXF with the Optimized Row Columnar (ORC) file format, which was formerly Beta.

Pivotal HDB 2.2.0 Upgrade
Pivotal HDB 2.2.0 upgrade paths:
The Upgrading from Pivotal HDB 2.1.x guide provides specific details on applying the Pivotal HDB 2.2.0 minor release to your HDB 2.1.0, 2.1.1, or 2.1.2 installation.
The Upgrading from Pivotal HDB 2.0.1 guide details the steps involved to upgrade your Pivotal HDB 2.0.1 installation to HDB 2.2.0.
Note: There is no direct upgrade path from Pivotal HDB 1.x to HDB 2.2.0. For more information on considerations for upgrading from Pivotal 1.x releases to Pivotal 2.x releases, refer to the Pivotal HDB 2.0 documentation. Contact your Pivotal representative for assistance in migrating from HDB 1.x to HDB 2.x.

Differences Compared to Apache HAWQ (Incubating)
Pivotal HDB 2.2.0 does not currently support the PXF JDBC Plug-in. Otherwise, Pivotal HDB 2.2.0 includes all of the functionality in Apache HAWQ (Incubating).

Resolved Issues
The following HAWQ and PXF issues were resolved in Pivotal HDB 2.2.0.
Apache Jira | Component | Summary
HAWQ-256 | Security | Integrated Security with Apache Ranger (partial Beta-release implementation)
HAWQ-309 | Build | Support for CentOS/RHEL 7
HAWQ-944 | Core | In numutils.c, the pg_ltoa and pg_itoa functions allocated unnecessary amounts of bytes
HAWQ-1063 | Command Line Tools | Fixed HAWQ Python library missing import
HAWQ-1314 | Catalog, HCatalog, PXF | The pxf_get_item_fields function would stop working after upgrade
HAWQ-1347 | Dispatcher | QD should only check segment health
HAWQ-1365 | Core | Print out detailed schema information for tables, even if the user doesn't have access privileges
HAWQ-1366 | Storage | HAWQ should throw an error if it finds a dictionary encoding type for Parquet
HAWQ-1371 | Query Execution | QE process hang in shared input scan
HAWQ-1378 | Core | Elaborate the "invalid command-line arguments for server process" error
HAWQ-1379 | Core | Do not send options multiple times in build_startup_packet
HAWQ-1385 | Command Line Tools | hawq_ctl stop fails when the master is down
HAWQ-1408 | Core | COPY ... FROM STDIN causes PANIC
HAWQ-1418 | Command Line Tools | Print executing command for hawq register
Known Issues and Limitations

MADlib 1.9.x Compression
Pivotal HDB 2.2.0 is compatible with MADlib 1.9, 1.9.1, and 1.10. If you have an existing HDB installation with MADlib 1.9.x installed, or are installing MADlib 1.9.x, you must download and execute a script to remove MADlib's use of Quicklz compression, which is not supported in HDB 2.2.0. Run this script if you are upgrading an HDB installation with MADlib 1.9.x to HDB 2.2.0, or if you are installing MADlib 1.9.x on HDB 2.2.0.
This procedure is not necessary if you are using or installing MADlib 1.10, or if you have previously disabled Quicklz compression.
If you are upgrading an HDB 2.0 system that contains MADlib:
1. Complete the Pivotal HDB 2.2.0 upgrade procedure as described in Upgrading to Pivotal HDB 2.2.0.
2. Download and unpack the MADlib 1.9.x binary distribution from the Pivotal HDB Download Page on Pivotal Network.
3. Execute the remove_compression.sh script in the MADlib 1.9.x distribution, providing the path to your existing MADlib installation:
$ remove_compression.sh --prefix
Note: If you do not include the --prefix option, the script uses the location ${GPHOME}/madlib.
For new MADlib installations, complete these steps after you install Pivotal HDB 2.2.0:
1. Download and unpack the MADlib 1.9.x binary distribution from the Pivotal HDB Download Page on Pivotal Network.
2. Install the MADlib .gppkg file:
$ gppkg -i /madlib-ossv1.9.1_pv1.9.6_hawq2.1-rhel5-x86_64.gppkg
3. Execute the remove_compression.sh script, optionally providing the MADlib installation path:
$ remove_compression.sh --prefix
Note: If you do not include the --prefix option, the script uses the location ${GPHOME}/madlib.
4. Continue installing MADlib using the madpack install command as described in the MADlib Installation Guide. For example:
$ madpack -p hawq install
Operating System
Some Linux kernel versions between 2.6.32 and 4.3.3 (not including 2.6.32 and 4.3.3 themselves) have a bug that could introduce a getaddrinfo() function hang. To avoid this issue, upgrade your RHEL 6 kernel to version 4.3.3+.
If you are running RHEL 7, ensure that your kernel version is 3.10.0-327.27 or above; otherwise you may experience hangs with large workloads.
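A quick way to confirm whether a host falls inside the affected range is to inspect the running kernel release with standard Linux tooling (shown as a sketch; compare the printed value against the ranges above):

```shell
# Print the running kernel release, e.g. 3.10.0-327.el7.x86_64, so it can be
# compared against the affected (2.6.32, 4.3.3) range described above.
kernel="$(uname -r)"
echo "Running kernel: ${kernel}"
```

Run this on each HAWQ host before and after applying a kernel update.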
Ranger Integration (Beta)
Beta-level support of the Ranger Plugin Service has a number of known limitations. For more information, refer to Limitations of Ranger Policy Management.
PXF
GPSQL-3345 - To take advantage of the change in the number of virtual segments, PXF external tables must be dropped and re-created after updating the default_hash_table_bucket_number server configuration parameter.
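A drop-and-recreate sequence might look like the following sketch. The table name, column list, host, path, and profile below are hypothetical examples, not values from these release notes:

```sql
-- Hypothetical example: re-create a PXF external table so it picks up
-- the new default_hash_table_bucket_number setting.
DROP EXTERNAL TABLE IF EXISTS ext_sales;

CREATE EXTERNAL TABLE ext_sales (id int, amount float8)
  LOCATION ('pxf://namenode:51200/data/sales?PROFILE=HdfsTextSimple')
  FORMAT 'TEXT' (DELIMITER ',');
```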
GPSQL-3347 - The LOCATION string provided when creating a PXF external table must use only ASCII characters to identify a file path. Specifying double-byte or multi-byte characters in a file path returns the following error (formatted for clarity):
ERROR: remote component error (500) from 'IP_Address:51200':
  type Exception report
  message: File does not exist: /tmp/??????/ABC-??????-001.csv
  description: The server encountered an internal error that prevented it from fulfilling this request.
  exception: java.io.IOException: File does not exist: /tmp/??????/ABC-??????-001.csv
  (libchurl.c:897) (seg10 hdw2.hdp.local:40000 pid=389911) (dispatcher.c:1801)
PXF in a Kerberos-secured cluster requires YARN to be installed due to a dependency on YARN libraries.
In order for PXF to interoperate with HBase, you must manually add the PXF HBase JAR file to the HBase classpath after installation. See Post-Install Procedure for Hive and HBase on HDP.
HAWQ-974 - When using certain PXF profiles to query larger files stored in HDFS, users may occasionally experience hangs or query timeouts. This is a known issue that will be improved in a future HDB release. Refer to Addressing PXF Memory Issues for a discussion of the configuration options available to address these issues in your PXF deployment.
The HiveORC profile supports aggregate queries (count, min, max, etc.), but they have not yet been optimized to leverage ORC file- and stripe-level metadata.
The HiveORC profile does not yet use the vectorized batch reader.
PL/R
The HAWQ PL/R extension is provided as a separate RPM in the hdb-add-ons-2.2.0.0 repository. The files installed by this RPM are owned by root. If you installed HAWQ via Ambari, HAWQ files are owned by gpadmin. Perform the following steps on each node in your HAWQ cluster after PL/R RPM installation to align the ownership of the PL/R files:
root@hawq-node$ cd /usr/local/hawq
root@hawq-node$ chown gpadmin:gpadmin share/postgresql/contrib/plr.sql docs/contrib/README.plr lib/postgresql/plr.so
Ambari
Ambari-managed clusters should only use Ambari for setting server configuration parameters. Parameters modified using the hawq config command will be overwritten on Ambari startup or reconfiguration.
When installing HAWQ in a Kerberos-secured cluster, the installation process may report a warning/failure in Ambari if the HAWQ configuration for resource management type is switched to YARN mode during installation. The warning is related to HAWQ not being able to register with YARN until the HDFS and YARN services are restarted with the new configurations resulting from the HAWQ installation process.
The HAWQ standby master will not work after you change the HAWQ master port number. To enable the standby master you must first remove and then re-initialize it. See Removing the HAWQ Standby Master and Activating the HAWQ Standby Master.
The Ambari Re-Synchronize HAWQ Standby Master service action fails if there is an active connection to the HAWQ master node. The HAWQ task output shows the error "Active connections. Aborting shutdown..." If this occurs, close all active connections and then try the re-synchronize action again.
The Ambari Run Service Check action for HAWQ and PXF may not work properly on a secure cluster if PXF is not co-located with the YARN component.
In a secured cluster, if you move the YARN ResourceManager to another host you must manually update hadoop.proxyuser.yarn.hosts in the HDFS core-site.xml file to match the new ResourceManager hostname. If you do not perform this step, HAWQ segments fail to get resources from the ResourceManager.
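As a sketch, the updated core-site.xml entry would look like the following; the hostname is a placeholder for your new ResourceManager host:

```xml
<!-- Hypothetical example: point the yarn proxyuser entry in HDFS
     core-site.xml at the host the ResourceManager was moved to. -->
<property>
  <name>hadoop.proxyuser.yarn.hosts</name>
  <value>new-resourcemanager-host.example.com</value>
</property>
```

Restart the affected HDFS services after changing this value so the new setting takes effect.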
The Ambari Stop HAWQ Server (Immediate Mode) service action or hawq stop -M immediate command may not stop all HAWQ master processes in some cases. Several postgres processes owned by the gpadmin user may remain active.
Ambari checks whether the hawq_rm_yarn_address and hawq_rm_yarn_scheduler_address values are valid when YARN HA is not enabled. In clusters that use YARN HA, these properties are not used and may get out of sync with the active ResourceManager. This can lead to false warnings from Ambari if you try to change the property values.
Ambari does not support Custom Configuration Groups with HAWQ.
Certain HAWQ server configuration parameters related to resource enforcement are not active. Modifying these parameters has no effect in HAWQ since the resource enforcement feature is not currently supported. These parameters include hawq_re_cgroup_hierarchy_name, hawq_re_cgroup_mount_point, and hawq_re_cpu_enable. These parameters appear in the Advanced hawq-site configuration section of the Ambari management interface.
Workaround Required after Moving Namenode
If you use the Ambari Move Namenode Wizard to move a Hadoop namenode, the Wizard does not automatically update the HAWQ configuration to reflect the change. This leaves HAWQ in a non-functional state, and will cause HAWQ service checks to fail with an error similar to:

2017-04-19 21:22:59,138 - SQL command executed failed: export PGPORT=5432 && source /usr/local/hawq/greenplum_path.sh && psql -d template1 -c \"CREATE TABLE ambari_hawq_test (col1 int) DISTRIBUTED RANDOMLY;\"
Returncode: 1
Stdout:
Stderr: Warning: Permanently added 'ip-10-32-36-168.ore1.vpc.pivotal.io,10.32.36.168' (RSA) to the list of known hosts.
WARNING: could not remove relation directory 16385/1/18366: Input/output error
CONTEXT: Dropping file-system object -- Relation Directory: '16385/1/18366'
ERROR: could not create relation directory hdfs://ip-10-32-36-168.ore1.vpc.pivotal.io:8020/hawq_default/16385/1/18366: Input/output error

2016-04-19 21:22:59,139 - SERVICE CHECK FAILED: HAWQ was not able to write and query from a table
2016-04-19 21:23:02,608 - ** FAILURE **: Service check failed 1 of 3 checks
stdout: /var/lib/ambari-agent/data/output-281.txt

To work around this problem, perform one of the following procedures after you complete the Move Namenode Wizard.
Workaround for Non-HA NameNode Clusters:
1. Perform an HDFS service check to ensure that HDFS is running properly after you moved the NameNode.
2. Use the Ambari configs.sh utility to update hawq_dfs_url to the new NameNode address. See Modify Configurations on the Ambari Wiki for more information. For example:
$ cd /var/lib/ambari-server/resources/scripts/
$ ./configs.sh set {ambari_server_host} {clustername} hawq-site hawq_dfs_url {new_namenode_address}:{port}/hawq_default
3. Restart HAWQ to apply the configuration change.
4. Use ssh to log in to a HAWQ node and run the checkpoint command:
$ psql -d template1 -c "checkpoint"
5. Stop the HAWQ service.
6. The master data directory is identified in the hawq_master_directory property value of the $GPHOME/etc/hawq-site.xml file. Copy the master data directory to a backup location:
$ export MDATA_DIR=/value/from/hawqsite
$ cp -r $MDATA_DIR /catalog/backup/location
7. Execute this query to display all available HAWQ filespaces:
SELECT fsname, fsedbid, fselocation
FROM pg_filespace AS sp, pg_filespace_entry AS entry, pg_filesystem AS fs
WHERE sp.fsfsys = fs.oid AND fs.fsysname = 'hdfs' AND sp.oid = entry.fsefsoid
ORDER BY entry.fsedbid;

    fsname    | fsedbid |               fselocation
--------------+---------+------------------------------------------------
 cdbfast_fs_a |       0 | hdfs://hdfs-cluster/hawq//cdbfast_fs_a
 dfs_system   |       0 | hdfs://test5:9000/hawq/hawq-1459499690
(2 rows)

8. Execute the hawq filespace command on each filespace that was returned by the previous query. For example:
$ hawq filespace --movefilespace dfs_system --location=hdfs://new_namenode:port/hawq/hawq-1459499690
$ hawq filespace --movefilespace cdbfast_fs_a --location=hdfs://new_namenode:port/hawq//cdbfast_fs_a
9. If your cluster uses a HAWQ standby master, reinitialize the standby master in Ambari using the Remove Standby Wizard followed by the Add Standby Wizard.
10. Start the HAWQ service.
11. Run a HAWQ service check to ensure that all tests pass.
Workaround for HA NameNode Clusters:
1. Perform an HDFS service check to ensure that HDFS is running properly after you moved the NameNode.
2. Use Ambari to expand Custom hdfs-client in the HAWQ Configs tab, then update the dfs.namenode. properties to match the current NameNode configuration.
3. Restart HAWQ to apply the configuration change.
4. Run a HAWQ service check to ensure that all tests pass.
Pivotal HDB 2.1.2 Release Notes
HDB 2.1.2 is a minor release of the product and is based on Apache HAWQ 2.1.0.0 (Incubating). This release includes bug fixes, enhancements to the Beta release of Optimized Row Columnar (ORC) file format support, and enhancements to the PXF Hive plug-in.

Supported Platforms
The supported platform for running Pivotal HDB 2.1.2 comprises:
Red Hat Enterprise Linux (RHEL) 6.4+ (64-bit). (See the note in Known Issues and Limitations for kernel limitations.)
Hortonworks Data Platform (HDP) 2.5.
Ambari 2.4.1 or Ambari 2.4.2 (for Ambari-based installation and HAWQ cluster management).
Each Pivotal HDB host machine must also meet the Apache HAWQ (Incubating) system requirements. See Apache HAWQ System Requirements for more information.
Product Support Matrix
The following table summarizes Pivotal HDB product support for current and previous versions of HDB, Hadoop, HAWQ, Ambari, and operating systems.

Pivotal HDB Version | PXF Version | HDP Version Requirement | Ambari Version Requirement | HAWQ Ambari Plug-in Requirement | MADlib Version Requirement | RHEL/CentOS Version Requirement | SuSE Version Requirement
2.1.2.0 | 3.2.0.0 | 2.5 | 2.4.1, 2.4.2 | 2.1.2.0 | 1.9, 1.9.1, 1.10 | 6.4+ (64-bit) | n/a
2.1.1.0 | 3.1.1.0 | 2.5 | 2.4.1 | 2.1.1.0 | 1.9, 1.9.1 | 6.4+ (64-bit) | n/a
2.1.0.0 | 3.1.0.0 | 2.5 | 2.4.1 | 2.1.0.0 | 1.9, 1.9.1 | 6.4+ (64-bit) | n/a
2.0.1.0 | 3.0.1 | 2.4.0, 2.4.2 | 2.2.2, 2.4 | 2.0.1 | 1.9, 1.9.1 | 6.4+ (64-bit) | n/a
2.0.0.0 | 3.0.0 | 2.3.4, 2.4.0 | 2.2.2 | 2.0.0 | 1.9, 1.9.1 | 6.4+ (64-bit) | n/a
1.3.1.1 | 2.5.1.1 | 2.2.6 | 2.0.x | 1.3.1 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | SLES 11 SP3
1.3.1.0 | 2.5.1.1 | 2.2.6 | 2.0.x | 1.3.1 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | SLES 11 SP3
1.3.0.3 | 1.3.3 | 2.2.4.2 | 1.7 | 1.2 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | SLES 11 SP3
1.3.0.2 | 1.3.3 | 2.2.4.2 | 1.7 | 1.2 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | SLES 11 SP3
1.3.0.1 | 1.3.3 | 2.2.4.2 | 1.7 | 1.1 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+ | n/a
1.3.0.0 | 1.3.3 | n/a | n/a | n/a | 1.7.1, 1.8, 1.9, 1.9.1 | n/a | n/a
Note: RHEL/CentOS 7 is not supported.
Note: If you are using Ambari 2.4.1 and you want to install both HDP and HAWQ at the same time, see Installing HDP and HDB with Ambari 2.4.1 before you begin.
Procedural Language Support Matrix
The following table summarizes component version support for procedural languages available in Pivotal HDB 2.x. The versions listed have been tested with HDB. Higher versions may be compatible; please test higher versions thoroughly in your non-production environments before deploying to production.

Pivotal HDB Version | PL/Java Java Version Requirement | PL/R R Version Requirement | PL/Perl Perl Version Requirement | PL/Python Python Version Requirement
2.1.2.0 | 1.7 | 3.3.1 | 5.10.1 | 2.6.2
2.1.1.0 | 1.7 | 3.3.1 | 5.10.1 | 2.6.2
2.1.0.0 | 1.7 | 3.3.1 | 5.10.1 | 2.6.2
2.0.1.0 | 1.7 | 3.3.1 | 5.10.1 | 2.6.2
2.0.0.0 | 1.6, 1.7 | 3.1.0 | 5.10.1 | 2.6.2
AWS Support Requirements
Pivotal HDB is supported on Amazon Web Services (AWS) servers using either Amazon block-level instance store (Amazon uses the volume names ephemeral[0-23]) or Amazon Elastic Block Store (Amazon EBS) storage. Use long-running EC2 instances for long-running HAWQ instances, as Spot Instances can be interrupted. If you use Spot Instances, minimize the risk of data loss by loading from and exporting to external storage.
Pivotal HDB 2.1.2 Features and Changes
Pivotal HDB 2.1.2 is based on Apache HAWQ 2.1.0.0 (Incubating), and includes the following new features as compared to Pivotal HDB 2.1.1.0:
ORC file format support enhancements - HDB 2.1.2 includes enhancements to the Optimized Row Columnar (ORC) file format support Beta released in HDB 2.1.1. Refer to the ORC Beta documentation for specific information related to these enhancements.
PXF Hive Plug-in enhancements - PXF now selects the optimal Hive profile for the underlying file storage type of each table/partition/fragment when responding to a HAWQ query of Hive data. The optimal profile is selected both for external table and HCatalog integration queries. PXF implementation changes for this improvement require Ambari-managed clusters to perform an additional post-install/upgrade procedure.
Installing HDP and HDB with Ambari 2.4.1
If you are using Ambari 2.4.1 and you want to install both HDP and HAWQ at the same time, special care must be taken if you want to install the very latest version of the HDP stack instead of the default version. Follow these steps:
1. After installing Ambari, start the Cluster Install Wizard and proceed until you reach the Select Version screen.
2. On the Select Version screen, select HDP-2.5 from the list of available stack versions.
3. While still on the Select Version screen, copy the Base URL values for the HDP-2.5 and HDP-UTILS-1.1.0.21 repositories that are listed for your operating system. Paste these values into a temporary file; you will need to restore these Base URL values later.
4. Use the drop-down menu for HDP-2.5 to select the stack option HDP-2.5 (Default Version Definition). Verify that the hdb-2.1.2.0 and hdb-add-ons-2.1.2.0 repositories now appear in the list of repositories for your operating system.
5. To install the very latest version of HDP, replace the Base URL values for the HDP-2.5 and HDP-UTILS-1.1.0.21 repositories with the values you pasted into the text file in Step 3.
6. Click Next to continue, and finish installing the new HDP cluster.
7. Install and configure HDB as described in Installing HAWQ Using Ambari.
Note: This workaround may not be required with later versions of Ambari 2.4.
HDB 2.1.2 Upgrade
HDB 2.1.2 upgrade paths:
The Upgrading from HDB 2.1.x guide provides specific details on applying the HDB 2.1.2 maintenance release to your HDB 2.1.0 or HDB 2.1.1 installation.
The Upgrading from HDB 2.0.x guide details the steps involved to upgrade your HDB 2.0.x installation to HDB 2.1.2.
Note: If you are upgrading from an HDB version prior to 2.0, refer to the HDB 2.0 documentation.
Differences Compared to Apache HAWQ (Incubating)
Pivotal HDB 2.1.2 includes all of the functionality in Apache HAWQ 2.1.0.0 (Incubating) plus additional bug fixes, noted by an asterisk (*) in the Resolved Issues table below.

Resolved Issues
The following HAWQ and PXF issues were resolved in HDB 2.1.2.
Apache Jira | Component | Summary
HAWQ-762 | PXF | Hive aggregation queries through PXF would sometimes hang
HAWQ-870 | Query Execution | Allocate target's tuple table slot in PortalHeapMemory during split partition
HAWQ-1177 | HCatalog, PXF | Use profile based on file format in HCatalog integration for Hive ORC profile
HAWQ-1208 | Interconnect | Fixed random interconnect failure
HAWQ-1214 | Resource Manager | Removed resource_parameters
HAWQ-1215 | PXF | PXF Hive ORC profile did not handle complex types correctly
HAWQ-1227 | Command Line Tools | HAWQ init would fail if username contains capital character
HAWQ-1228 | HCatalog, PXF | Use profile based on file format in HCatalog integration (HiveRC, HiveText profiles)
HAWQ-1229 | Command Line Tools | Removed unused option in 'hawq config' help message
HAWQ-1240 | Query Execution | Fixed bug in plan refinement for cursor operation
HAWQ-1241 | Core | No need to set ext/python in *PATH in file greenplum_path.sh
HAWQ-1242 | Core | hawq-site.xml default content had wrong GUC variable names
HAWQ-1258 | Resource Manager | Segment resource manager did not switch back when it could not resolve standby hostname
HAWQ-1282 | Core | Shared Input Scan would result in endless loop
HAWQ-1285 | Resource Manager | Resource manager would output uninitialized string as hostname
HAWQ-1286 | Security | Reduced unnecessary calls to namespace check when running \d
HAWQ-1308 | PXF | Fixed Javadoc compile warnings
HAWQ-1309 | PXF | PXF service must default to port 51200 and user pxf
HAWQ-1314 | Catalog, HCatalog, PXF | Fixed post-upgrade pxf_get_item_fields() function break
HAWQ-1315 | Resource Manager | Function validateResourcePoolStatus() in resourcepool.c logged the wrong information
HAWQ-1317 | Security | Ported "Fix some regex issues with out-of-range characters and large char ranges" from pg
HAWQ-1321 | Core | failNames incorrectly used memory context to build message when ANALYZE failed
HAWQ-1324 | Query Execution | Query cancel caused the segment to go into crash recovery
HAWQ-1326 | Query Execution | Cancel the query earlier if one of the segments for the query crashes
HAWQ-1334 | Dispatcher | QD thread now sets error code if failing so that the main process for the query could exit soon
HAWQ-1338 | Core | In some cases writer process crashed when running 'hawq stop cluster'
HAWQ-1345* | Catalog | Could not count blocks of relation: Not a directory
HAWQ-1347* | Dispatcher | QD would check segment health only
Note *: HDB 2.1.2 includes resolved issues HAWQ-1345 and HAWQ-1347, additional bug fixes applied to the Apache HAWQ 2.1.0.0 (Incubating) release.
Known Issues and Limitations

MADlib Compression
Pivotal HDB 2.1.2 is compatible with MADlib 1.9, 1.9.1, and 1.10. If you have an existing HDB installation with MADlib 1.9.x installed, or are installing MADlib 1.9.x, you must download and execute a script to remove MADlib's use of Quicklz compression, which is not supported in HDB 2.1.2. Run this script if you are upgrading an HDB installation with MADlib 1.9.x to HDB 2.1.2, or if you are installing MADlib 1.9.x on HDB 2.1.2.
This procedure is not necessary if you are using or installing MADlib 1.10, or if you have previously disabled Quicklz compression.
If you are upgrading an HDB 2.0 system that contains MADlib:
1. Complete the Pivotal HDB 2.1.2 upgrade procedure as described in Upgrading to Pivotal HDB 2.1.2.
2. Download and unpack the MADlib 1.9.1 binary distribution from the Pivotal HDB Download Page on Pivotal Network.
3. Execute the remove_compression.sh script in the MADlib 1.9.1 distribution, providing the path to your existing MADlib installation:
$ remove_compression.sh --prefix
Note: If you do not include the --prefix option, the script uses the location ${GPHOME}/madlib.
For new MADlib installations, complete these steps after you install Pivotal HDB 2.1.2:
1. Download and unpack the MADlib 1.9.1 binary distribution from the Pivotal HDB Download Page on Pivotal Network.
2. Install the MADlib .gppkg file:
$ gppkg -i /madlib-ossv1.9.1_pv1.9.6_hawq2.1-rhel5-x86_64.gppkg
3. Execute the remove_compression.sh script, optionally providing the MADlib installation path:
$ remove_compression.sh --prefix
Note: If you do not include the --prefix option, the script uses the location ${GPHOME}/madlib.
4. Continue installing MADlib using the madpack install command as described in the MADlib Installation Guide. For example:
$ madpack -p hawq install
Operating System
Some Linux kernel versions between 2.6.32 and 4.3.3 (not including 2.6.32 and 4.3.3 themselves) have a bug that could introduce a getaddrinfo() function hang. To avoid this issue, upgrade the kernel to version 4.3.3+.
PXF
GPSQL-3345 - To take advantage of the change in the number of virtual segments, PXF external tables must be dropped and re-created after updating the default_hash_table_bucket_number server configuration parameter.
GPSQL-3347 - The LOCATION string provided when creating a PXF external table must use only ASCII characters to identify a file path. Specifying double-byte or multi-byte characters in a file path returns the following error (formatted for clarity):
ERROR: remote component error (500) from 'IP_Address:51200':
  type Exception report
  message: File does not exist: /tmp/??????/ABC-??????-001.csv
  description: The server encountered an internal error that prevented it from fulfilling this request.
  exception: java.io.IOException: File does not exist: /tmp/??????/ABC-??????-001.csv
  (libchurl.c:897) (seg10 hdw2.hdp.local:40000 pid=389911) (dispatcher.c:1801)
ORC - Refer to ORC Known Issues and Limitations for a list of known issues related to the ORC Beta.
PXF in a Kerberos-secured cluster requires YARN to be installed due to a dependency on YARN libraries.
In order for PXF to interoperate with HBase, you must manually add the PXF HBase JAR file to the HBase classpath after installation. See Post-Install Procedure for Hive and HBase on HDP.
HAWQ-974 - When using certain PXF profiles to query larger files stored in HDFS, users may occasionally experience hangs or query timeouts. This is a known issue that will be improved in a future HDB release. Refer to Addressing PXF Memory Issues for a discussion of the configuration options available to address these issues in your PXF deployment.
PL/R
The HAWQ PL/R extension is provided as a separate RPM in the hdb-add-ons-2.1.2.0 repository. The files installed by this RPM are owned by root. If you
installed HAWQ via Ambari, HAWQ files are owned by gpadmin. Perform the following steps on each node in your HAWQ cluster after PL/R RPM installation to align the ownership of the PL/R files:
root@hawq-node$ cd /usr/local/hawq
root@hawq-node$ chown gpadmin:gpadmin share/postgresql/contrib/plr.sql docs/contrib/README.plr lib/postgresql/plr.so
Ambari
Ambari-managed clusters should only use Ambari for setting server configuration parameters. Parameters modified using the hawq config command will be overwritten on Ambari startup or reconfiguration.
In certain configurations, the HAWQ Master may fail to start in Ambari versions prior to 2.4.2 when webhdfs is disabled. Refer to AMBARI-18837. To work around this issue, enable webhdfs by setting dfs.webhdfs.enabled to true in hdfs-site.xml, or contact Support.
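A minimal sketch of that hdfs-site.xml workaround follows; apply it through Ambari's HDFS configuration page rather than editing the file directly on Ambari-managed clusters:

```xml
<!-- Sketch of the workaround: enable webhdfs so the HAWQ Master can
     start under Ambari versions earlier than 2.4.2. -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
```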
When installing HAWQ in a Kerberos-secured cluster, the installation process may report a warning/failure in Ambari if the HAWQ configuration for resource management type is switched to YARN mode during installation. The warning is related to HAWQ not being able to register with YARN until the HDFS and YARN services are restarted with the new configurations resulting from the HAWQ installation process.
The HAWQ standby master will not work after you change the HAWQ master port number. To enable the standby master you must first remove and then re-initialize it. See Removing the HAWQ Standby Master and Activating the HAWQ Standby Master.
The Ambari Re-Synchronize HAWQ Standby Master service action fails if there is an active connection to the HAWQ master node. The HAWQ task output shows the error "Active connections. Aborting shutdown..." If this occurs, close all active connections and then try the re-synchronize action again.
The Ambari Run Service Check action for HAWQ and PXF may not work properly on a secure cluster if PXF is not co-located with the YARN component.
In a secured cluster, if you move the YARN ResourceManager to another host you must manually update hadoop.proxyuser.yarn.hosts in the HDFS core-site.xml file to match the new ResourceManager hostname. If you do not perform this step, HAWQ segments fail to get resources from the ResourceManager.
The Ambari Stop HAWQ Server (Immediate Mode) service action or hawq stop -M immediate command may not stop all HAWQ master processes in some cases. Several postgres processes owned by the gpadmin user may remain active.
Ambari checks whether the hawq_rm_yarn_address and hawq_rm_yarn_scheduler_address values are valid when YARN HA is not enabled. In clusters that use YARN HA, these properties are not used and may get out of sync with the active ResourceManager. This can lead to false warnings from Ambari if you try to change the property values.
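For reference, the two properties in question live in hawq-site.xml. The values below are placeholder examples only; 8032 and 8030 are the stock YARN ResourceManager and scheduler ports, which may differ in your cluster:

```xml
<!-- Hypothetical example values for the YARN-related HAWQ properties
     that Ambari validates. Not used when YARN HA is enabled. -->
<property>
  <name>hawq_rm_yarn_address</name>
  <value>resourcemanager-host.example.com:8032</value>
</property>
<property>
  <name>hawq_rm_yarn_scheduler_address</name>
  <value>resourcemanager-host.example.com:8030</value>
</property>
```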
Ambari does not support Custom Configuration Groups with HAWQ.
Certain HAWQ server configuration parameters related to resource enforcement are not active. Modifying these parameters has no effect in HAWQ since the resource enforcement feature is not currently supported. These parameters include hawq_re_cgroup_hierarchy_name, hawq_re_cgroup_mount_point, and hawq_re_cpu_enable. These parameters appear in the Advanced hawq-site configuration section of the Ambari management interface.
Workaround Required after Moving Namenode
If you use the Ambari Move Namenode Wizard to move a Hadoop namenode, the Wizard does not automatically update the HAWQ configuration to reflect the change. This leaves HAWQ in a non-functional state, and causes HAWQ service checks to fail with an error similar to:
2017-04-19 21:22:59,138 - SQL command executed failed: export PGPORT=5432 && source /usr/local/hawq/greenplum_path.sh && psql -d template1 -c \"CREATE TABLE ambari_hawq_test (col1 int) DISTRIBUTED RANDOMLY;\"
Returncode: 1
Stdout:
Stderr: Warning: Permanently added 'ip-10-32-36-168.ore1.vpc.pivotal.io,10.32.36.168' (RSA) to the list of known hosts.
WARNING: could not remove relation directory 16385/1/18366: Input/output error
CONTEXT: Dropping file-system object -- Relation Directory: '16385/1/18366'
ERROR: could not create relation directory hdfs://ip-10-32-36-168.ore1.vpc.pivotal.io:8020/hawq_default/16385/1/18366: Input/output error
2016-04-19 21:22:59,139 - SERVICE CHECK FAILED: HAWQ was not able to write and query from a table
2016-04-19 21:23:02,608 - ** FAILURE **: Service check failed 1 of 3 checks
stdout: /var/lib/ambari-agent/data/output-281.txt
To work around this problem, perform one of the following procedures after you complete the Move Namenode Wizard.
© Copyright Pivotal Software Inc, 2013-2017 26 2.3.0
https://issues.apache.org/jira/browse/AMBARI-18837
https://hdb.docs.pivotal.io/212/hawq/admin/ambari-admin.html#amb-remove-standby
https://hdb.docs.pivotal.io/212/hawq/admin/ambari-admin.html#amb-activate-standby
Workaround for Non-HA NameNode Clusters:
1. Perform an HDFS service check to ensure that HDFS is running properly after you moved the NameNode.
2. Use the Ambari configs.sh utility to update hawq_dfs_url to the new NameNode address. See Modify configurations on the Ambari Wiki for more information. For example:
$ cd /var/lib/ambari-server/resources/scripts/
$ ./configs.sh set {ambari_server_host} {clustername} hawq-site hawq_dfs_url {new_namenode_address}:{port}/hawq_default
3. Restart the HAWQ service to apply the configuration change.
4. Use ssh to log in to a HAWQ node and run the checkpoint command:
$ psql -d template1 -c "checkpoint"
5. Stop the HAWQ service.
6. The master data directory is identified in the hawq_master_directory property value of the $GPHOME/etc/hawq-site.xml file. Copy the master data directory to a backup location:
$ export MDATA_DIR=/value/from/hawqsite
$ cp -r $MDATA_DIR /catalog/backup/location
7. Execute this query to display all available HAWQ filespaces:
8. SELECT fsname, fsedbid, fselocation
   FROM pg_filespace AS sp, pg_filespace_entry AS entry, pg_filesystem AS fs
   WHERE sp.fsfsys = fs.oid AND fs.fsysname = 'hdfs' AND sp.oid = entry.fsefsoid
   ORDER BY entry.fsedbid;
    fsname    | fsedbid |              fselocation
--------------+---------+------------------------------------------
 cdbfast_fs_a |       0 | hdfs://hdfs-cluster/hawq//cdbfast_fs_a
 dfs_system   |       0 | hdfs://test5:9000/hawq/hawq-1459499690
(2 rows)
9. Execute the hawq filespace command on each filespace that was returned by the previous query. For example:
$ hawq filespace --movefilespace dfs_system --location=hdfs://new_namenode:port/hawq/hawq-1459499690
$ hawq filespace --movefilespace cdbfast_fs_a --location=hdfs://new_namenode:port/hawq//cdbfast_fs_a
10. If your cluster uses a HAWQ standby master, reinitialize the standby master in Ambari using the Remove Standby Wizard followed by the Add Standby Wizard.
11. Start the HAWQ service.
12. Run a HAWQ service check to ensure that all tests pass.
Workaround for HA NameNode Clusters:
1. Perform an HDFS service check to ensure that HDFS is running properly after you moved the NameNode.
2. Use Ambari to expand Custom hdfs-client in the HAWQ Configs tab, then update the dfs.namenode.* properties to match the current NameNode configuration.
3. Restart the HAWQ service to apply the configuration change.
4. Run a HAWQ service check to ensure that all tests pass.
https://cwiki.apache.org/confluence/display/AMBARI/Modify+configurations
Pivotal HDB 2.1.1 Release Notes
HDB 2.1.1 includes a Beta release of Optimized Row Columnar (ORC) file format support for HAWQ.
Supported Platforms
The supported platform for running Pivotal HDB 2.1.1 comprises:
Red Hat Enterprise Linux (RHEL) 6.4+ (64-bit) (See the note in Known Issues and Limitations for kernel limitations.)
Hortonworks Data Platform (HDP) 2.5.
Ambari 2.4.1 (for Ambari-based installation and HAWQ cluster management).
Each Pivotal HDB host machine must also meet the Apache HAWQ (Incubating) system requirements. See Apache HAWQ System Requirements for more information.
Product Support Matrix
The following table summarizes Pivotal HDB product support for current and previous versions of HDB, Hadoop, HAWQ, Ambari, and operating systems.

Pivotal HDB Version | PXF Version | HDP Version Requirement | Ambari Version Requirement | HAWQ Ambari Plug-in Requirement | MADlib Version Requirement | RHEL/CentOS Version Requirement | SuSE Version Requirement
2.1.1.0 | 3.1.1   | 2.5          | 2.4.1      | 2.1.1 | 1.9, 1.9.1             | 6.4+ (64-bit) | n/a
2.1.0.0 | 3.1.0   | 2.5          | 2.4.1      | 2.1.0 | 1.9, 1.9.1             | 6.4+ (64-bit) | n/a
2.0.1.0 | 3.0.1   | 2.4.0, 2.4.2 | 2.2.2, 2.4 | 2.0.1 | 1.9, 1.9.1             | 6.4+ (64-bit) | n/a
2.0.0.0 | 3.0.0   | 2.3.4, 2.4.0 | 2.2.2      | 2.0.0 | 1.9, 1.9.1             | 6.4+ (64-bit) | n/a
1.3.1.1 | 2.5.1.1 | 2.2.6        | 2.0.x      | 1.3.1 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+          | SLES 11 SP3
1.3.1.0 | 2.5.1.1 | 2.2.6        | 2.0.x      | 1.3.1 | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+          | SLES 11 SP3
1.3.0.3 | 1.3.3   | 2.2.4.2      | 1.7        | 1.2   | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+          | SLES 11 SP3
1.3.0.2 | 1.3.3   | 2.2.4.2      | 1.7        | 1.2   | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+          | SLES 11 SP3
1.3.0.1 | 1.3.3   | 2.2.4.2      | 1.7        | 1.1   | 1.7.1, 1.8, 1.9, 1.9.1 | 6.4+          | n/a
1.3.0.0 | 1.3.3   | n/a          | n/a        | n/a   | 1.7.1, 1.8, 1.9, 1.9.1 | n/a           | n/a
Note: RHEL/CentOS 7 is not supported.
Note: If you are using Ambari 2.4.1 and you want to install both HDP and HAWQ at the same time, see Installing HDP and HDB with Ambari 2.4.1 before you begin.
Procedural Language Support Matrix
The following table summarizes component version support for the procedural languages available in Pivotal HDB 2.x. The versions listed have been tested with HDB. Higher versions may be compatible; test higher versions thoroughly in your non-production environments before deploying to production.

Pivotal HDB Version | PL/Java Java Version Requirement | PL/R R Version Requirement | PL/Perl Perl Version Requirement | PL/Python Python Version Requirement
2.1.1.0 | 1.7      | 3.3.1 | 5.10.1 | 2.6.2
2.1.0.0 | 1.7      | 3.3.1 | 5.10.1 | 2.6.2
2.0.1.0 | 1.7      | 3.3.1 | 5.10.1 | 2.6.2
2.0.0.0 | 1.6, 1.7 | 3.1.0 | 5.10.1 | 2.6.2
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_release-notes/content/ch_relnotes_v250.html
http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-release-notes/content/ch_relnotes-ambari-2.4.1.0.html
https://hdb.docs.pivotal.io/211/hawq/requirements/system-requirements.html
AWS Support Requirements
Pivotal HDB is supported on Amazon Web Services (AWS) servers using either Amazon block-level instance store (Amazon uses the volume names ephemeral[0-23]) or Amazon Elastic Block Store (Amazon EBS) storage. Use long-running EC2 instances with these storage types for long-running HAWQ instances, because Spot instances can be interrupted. If you use Spot instances, minimize the risk of data loss by loading from and exporting to external storage.
Pivotal HDB 2.1.1 Features and Changes
Pivotal HDB 2.1.1 is based on Apache HAWQ (Incubating), and includes the following new features as compared to Pivotal HDB 2.1.0.0:
ORC file format support - HDB 2.1.1 includes a Beta release of Optimized Row Columnar (ORC) file format support. Refer to the ORC Beta documentation for specific information related to this new feature.
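Queries against Hive-managed ORC data go through the PXF HiveORC profile delivered with this Beta. A minimal sketch of an external table definition follows; the host, port, Hive table name, and column list are illustrative placeholders, and the exact profile name and options should be confirmed against the ORC Beta documentation:

```sql
-- Hypothetical Hive table default.sales with matching columns;
-- namenode:51200 is a placeholder for your PXF service host and port.
CREATE EXTERNAL TABLE sales_orc (id int, total float8)
LOCATION ('pxf://namenode:51200/default.sales?PROFILE=HiveORC')
FORMAT 'CUSTOM' (formatter='pxfwritable_import');

SELECT id, total FROM sales_orc LIMIT 10;
```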
Installing HDP and HDB with Ambari 2.4.1
If you are using Ambari 2.4.1 and you want to install both HDP and HAWQ at the same time, special care must be taken if you want to install the very latest version of the HDP stack instead of the default version. Follow these steps:
1. After installing Ambari, start the Cluster Install Wizard and proceed until you reach the Select Version screen.
2. On the Select Version screen, select HDP-2.5 from the list of available stack versions.
3. While still on the Select Version screen, copy the Base URL values for the HDP-2.5 and HDP-UTILS-1.1.0.21 repositories that are listed for your operating system. Paste these values into a temporary file; you will need to restore these Base URL values later.
4. Use the drop-down menu for HDP-2.5 to select the stack option HDP-2.5 (Default Version Definition). Verify that the hdb-2.1.1.0 and hdb-add-ons-2.1.1.0 repositories now appear in the list of Repositories for your operating system.
5. To install the very latest version of HDP, replace the Base URL values for the HDP-2.5 and HDP-UTILS-1.1.0.21 repositories with the values you pasted into the text file in Step 3.
6. Click Next to continue, and finish installing the new HDP cluster.
7. Install and configure HAWQ as described in Installing HAWQ Using Ambari.
Note: This workaround may not be required with later versions of Ambari 2.4.
HDB 2.1.x Upgrade
HDB 2.1.1 upgrade paths:
The Upgrading from HDB 2.1.0 guide provides specific details on applying the HDB 2.1.1 maintenance release to your HDB 2.1.0 installation.
The Upgrading from HDB 2.0.x guide details the steps involved in upgrading your HDB 2.0.x installation to HDB 2.1.1.
Note: If you are upgrading from an HDB version prior to 2.0, refer to the HDB 2.0 documentation.
Differences Compared to Apache HAWQ (Incubating)
Pivotal HDB 2.1.1 includes all of the functionality in Apache HAWQ (Incubating), and adds several bug fixes described below.
Resolved Issues
The following HAWQ and PXF issues were resolved in HDB 2.1.1.
https://hdb.docs.pivotal.io/211/hdb/releasenotes/orc-support-beta.html
https://hdb.docs.pivotal.io/211/hdb/install/install-ambari.html
https://hdb.docs.pivotal.io/211/hdb/install/HDB210to21xUpgrade.html
https://hdb.docs.pivotal.io/211/hdb/install/HDB20xto210Upgrade.html
http://hdb.docs.pivotal.io/200/hdb/index.html
http://hawq.incubator.apache.org/
Apache Jira | Component | Summary
HAWQ-583  | PXF | Extended PXF to enable plugins to support returning partial content from SELECT (column projection) statements
HAWQ-779  | PXF | Support more PXF filter pushdown
HAWQ-931  | PXF | Hive ORC Accessor with support for Predicate pushdown and Column Projection
HAWQ-964  | PXF | Support for additional logical operators in PXF
HAWQ-1103 | PXF | Fix to send constant data type and length in filter string to PXF service
HAWQ-1111 | PXF | Support for IN() operator in PXF
HAWQ-1191 | PXF | Get rid of DELIMITER property for Hive ORC profile
HAWQ-1196 | PXF | Partitioned tables support for Hive ORC profile
HAWQ-1213 | Command Line Tools | Incorrect check of hawq register in case of randomly distributed table with non-default default_hash_table_bucket_number value
HAWQ-1221 | Command Line Tools | hawq register should error out when registering a YML file that doesn't exist
Known Issues and Limitations
MADlib Compression
Pivotal HDB 2.1.1 is compatible with MADlib 1.9 and 1.9.1. However, you must download and execute a script in order to remove the MADlib Quicklz compression, which is not supported in HDB 2.1.1. Run this script if you are upgrading to HDB 2.1.1, or if you are installing MADlib on HDB 2.1.1.
If you are upgrading an HDB 2.0 system that contains MADlib:
1. Complete the Pivotal HDB 2.1.1 upgrade procedure as described in Upgrading to Pivotal HDB 2.1.1.
2. Download and unpack the MADlib 1.9.1 binary distribution from the Pivotal HDB Download Page on Pivotal Network.
3. Execute the remove_compression.sh script in the MADlib 1.9.1 distribution, providing the path to your existing MADlib installation:
$ remove_compression.sh --prefix
Note: If you do not include the --prefix option, the script uses the location ${GPHOME}/madlib.
For new MADlib installations, complete these steps after you install Pivotal HDB 2.1.1:
1. Download and unpack the MADlib 1.9.1 binary distribution from the Pivotal HDB Download Page on Pivotal Network.
2. Install the MADlib .gppkg file:
$ gppkg -i /madlib-ossv1.9.1_pv1.9.6_hawq2.1-rhel5-x86_64.gppkg
3. Execute the remove_compression.sh script, optionally providing the MADlib installation path:
$ remove_compression.sh --prefix
Note: If you do not include the --prefix option, the script uses the location ${GPHOME}/madlib.
4. Continue installing MADlib using the madpack install command as described in the MADlib Installation Guide. For example:
$ madpack -p hawq install
https://issues.apache.org/jira/browse/HAWQ-583
https://issues.apache.org/jira/browse/HAWQ-779
https://issues.apache.org/jira/browse/HAWQ-931
https://issues.apache.org/jira/browse/HAWQ-964
https://issues.apache.org/jira/browse/HAWQ-1103
https://issues.apache.org/jira/browse/HAWQ-1111
https://issues.apache.org/jira/browse/HAWQ-1191
https://issues.apache.org/jira/browse/HAWQ-1196
https://issues.apache.org/jira/browse/HAWQ-1213
https://issues.apache.org/jira/browse/HAWQ-1221
https://hdb.docs.pivotal.io/211/hdb/install/HDB20xto210Upgrade.html
https://network.pivotal.io/products/pivotal-hdb
https://cwiki.apache.org/confluence/display/MADLIB/Installation+Guide
Operating System
Some Linux kernel versions between 2.6.32 and 4.3.3 (not including 2.6.32 and 4.3.3 themselves) have a bug that can cause the getaddrinfo() function to hang. To avoid this issue, upgrade the kernel to version 4.3.3 or later.
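The affected range can be checked mechanically; a minimal sketch follows. The is_affected helper is illustrative, not part of the HDB tooling, and comparing only the upstream base version ignores vendor backports:

```shell
# Flag kernels strictly between 2.6.32 and 4.3.3, the range cited above.
is_affected() {
  v=$1
  [ "$v" = "2.6.32" ] && return 1
  [ "$v" = "4.3.3" ] && return 1
  low=$(printf '%s\n%s\n' "$v" "2.6.32" | sort -V | head -n1)
  high=$(printf '%s\n%s\n' "$v" "4.3.3" | sort -V | tail -n1)
  [ "$low" = "2.6.32" ] && [ "$high" = "4.3.3" ]
}

kver=$(uname -r | cut -d- -f1)   # e.g. 2.6.32-642.el6.x86_64 -> 2.6.32
if is_affected "$kver"; then
  echo "Kernel $kver may be subject to the getaddrinfo() hang; upgrade to 4.3.3+"
fi
```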
PXF
ORC - Refer to ORC Known Issues and Limitations for a list of known issues related to the ORC Beta.
PXF in a Kerberos-secured cluster requires YARN to be installed due to a dependency on YARN libraries.
In order for PXF to interoperate with HBase, you must manually add the PXF HBase JAR file to the HBase classpath after installation. See Post-Install Procedure for Hive and HBase on HDP.
HAWQ-974 - When using certain PXF profiles to query larger files stored in HDFS, users may occasionally experience hangs or query timeouts. This is a known issue that will be improved in a future HDB release. Refer to Addressing PXF Memory Issues for a discussion of the configuration options available to address these issues in your PXF deployment.
After upgrading from HDB version 2.0.0, HCatalog access through PXF may fail with the following error:
postgres=# \d hcatalog.default.hive_table
ERROR: function return row and query-specified return row do not match
DETAIL: Returned row contains 5 attributes, but query expects 4.
To restore HCatalog access, you must update the PXF pxf_get_item_fields() function definition. Perform this procedure only if you upgraded from HDB 2.0.0.
1. Log in to the HAWQ master node and start the psql subsystem:
$ ssh gpadmin@master
gpadmin@master$ psql -d postgres
2. List all but the hcatalog and template0 databases:
postgres=# SELECT datname FROM pg_database WHERE NOT datname IN ('hcatalog', 'template0');
3. Run the following commands on each database identified in Step 2 to update the pxf_get_item_fields() function definition:
postgres=# CONNECT ;
postgres=# SET allow_system_table_mods = 'dml';
postgres=# UPDATE pg_proc SET proallargtypes = '{25,25,25,25,25,25,25}', proargmodes = '{i,i,o,o,o,o,o}', proargnames = '{profile,pattern,path,itemname,fieldname,fieldtype,sourcefieldtype}' WHERE proname = 'pxf_get_item_fields';
4. Reset your psql session:
postgres=# RESET allow_system_table_mods;
Note: Use the allow_system_table_mods server configuration parameter and the identified SQL commands only in the context of this workaround. They are not otherwise supported.
PL/R
The HAWQ PL/R extension is provided as a separate RPM in the hdb-add-ons-2.1.1.0 repository. The files installed by this RPM are owned by root. If you installed HAWQ via Ambari, HAWQ files are owned by gpadmin. Perform the following steps on each node in your HAWQ cluster after PL/R RPM installation to align the ownership of the PL/R files:
root@hawq-node$ cd /usr/local/hawq
root@hawq-node$ chown gpadmin:gpadmin share/postgresql/contrib/plr.sql docs/contrib/README.plr lib/postgresql/plr.so
Ambari
https://hdb.docs.pivotal.io/211/hdb/releasenotes/orc-support-beta.html#hiveorc-known-issues
https://hdb.docs.pivotal.io/211/hdb/install/install-ambari.html#post-install-pxf
https://issues.apache.org/jira/browse/HAWQ-974
https://hdb.docs.pivotal.io/211/hawq/pxf/TroubleshootingPXF.html#pxf-memcfg
Ambari-managed clusters should use only Ambari for setting system parameters. Parameters modified using the hawq config command will be overwritten on Ambari startup or reconfiguration.
In certain configurations, the HAWQ Master may fail to start in Ambari versions prior to 2.4.2 when webhdfs is disabled. Refer to AMBARI-18837. To work around this issue, enable webhdfs by setting dfs.webhdfs.enabled to true in hdfs-site.xml, or contact Support.
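The hdfs-site.xml entry takes the standard Hadoop property form; a minimal sketch (on an Ambari-managed cluster, set this through the Ambari HDFS configuration rather than editing the file by hand):

```xml
<!-- hdfs-site.xml: re-enable WebHDFS so the HAWQ Master can start -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
```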
When installing HAWQ in a Kerberos-secured cluster, the installation process may report a warning/failure in Ambari if the HAWQ configuration for resource management type is switched to YARN mode during installation. The warning is related to HAWQ not being able to register with YARN until the HDFS and YARN services are restarted with the new configurations resulting from the HAWQ installation process.
The HAWQ standby master will not work after you change the HAWQ master port number. To enable the standby master you must first remove and then re-initialize it. See Removing the HAWQ Standby Master and Activating the HAWQ Standby Master.
The Ambari Re-Synchronize HAWQ Standby Master service action fails if there is an active connection to the HAWQ master node. The HAWQ task output shows the error: Active connections. Aborting shutdown... If this occurs, close all active connections and then try the re-synchronize action again.
The Ambari Run Service Check action for HAWQ and PXF may not work properly on a secure cluster if PXF is not co-located with the YARN component.
In a secured cluster, if you move the YARN ResourceManager to another host you must manually update hadoop.proxyuser.yarn.hosts in the HDFS core-site.xml file to match the new ResourceManager hostname. If you do not perform this step, HAWQ segments fail to get resources from the ResourceManager.
The Ambari Stop HAWQ Server (Immediate Mode) service action or hawq stop -M immediate command may not stop all HAWQ master processes in some cases. Several postgres processes owned by the gpadmin user may remain active.
Ambari checks whether the hawq_rm_yarn_address and hawq_rm_yarn_scheduler_address values are valid when YARN HA is not enabled. In clusters that use YARN HA, these properties are not used and may get out of sync with the active ResourceManager. This can lead to false warnings from Ambari if you try to change the property values.
Ambari does not support Custom Configuration Groups with HAWQ.
Certain HAWQ server configuration parameters related to resource enforcement are not active. Modifying the parameters has no effect in HAWQ because the resource enforcement feature is not currently supported. These parameters include hawq_re_cgroup_hierarchy_name, hawq_re_cgroup_mount_point, and hawq_re_cpu_enable. These parameters appear in the Advanced hawq-site configuration section of the Ambari management interface.
Workaround Required after Moving Namenode
If you use the Ambari Move Namenode Wizard to move a Hadoop namenode, the Wizard does not automatically update the HAWQ configuration to reflect the change. This leaves HAWQ in a non-functional state, and causes HAWQ service checks to fail with an error similar to:
2017-04-19 21:22:59,138 - SQL command executed failed: export PGPORT=5432 && source /usr/local/hawq/greenplum_path.sh && psql -d template1 -c \"CREATE TABLE ambari_hawq_test (col1 int) DISTRIBUTED RANDOMLY;\"
Returncode: 1
Stdout:
Stderr: Warning: Permanently added 'ip-10-32-36-168.ore1.vpc.pivotal.io,10.32.36.168' (RSA) to the list of known hosts.
WARNING: could not remove relation directory 16385/1/18366: Input/output error
CONTEXT: Dropping file-system object -- Relation Directory: '16385/1/18366'
ERROR: could not create relation directory hdfs://ip-10-32-36-168.ore1.vpc.pivotal.io:8020/hawq_default/16385/1/18366: Input/output error
2016-04-19 21:22:59,139 - SERVICE CHECK FAILED: HAWQ was not able to write and query from a table
2016-04-19 21:23:02,608 - ** FAILURE **: Service check failed 1 of 3 checks
stdout: /var/lib/ambari-agent/data/output-281.txt
To work around this problem, perform one of the following procedures after you complete the Move Namenode Wizard.
Workaround for Non-HA NameNode Clusters:
1. Perform an HDFS service check to ensure that HDFS is running properly after you moved the NameNode.
2. Use the Ambari configs.sh utility to update hawq_dfs_url to the new NameNode address. See Modify configurations on the Ambari Wiki for more information. For example:
$ cd /var/lib/ambari-server/resources/scripts/
$ ./configs.sh set {ambari_server_host} {clustername} hawq-site hawq_dfs_url {new_namenode_address}:{port}/hawq_default
https://issues.apache.org/jira/browse/AMBARI-18837
https://hdb.docs.pivotal.io/211/hawq/admin/ambari-admin.html#amb-remove-standby
https://hdb.docs.pivotal.io/211/hawq/admin/ambari-admin.html#amb-activate-standby
https://cwiki.apache.org/confluence/display/AMBARI/Modify+configurations
3. Restart the HAWQ service to apply the configuration change.
4. Use ssh to log in to a HAWQ node and run the checkpoint command:
$ psql -d template1 -c "checkpoint"
5. Stop the HAWQ service.
6. The master data directory is identified in the hawq_master_directory property value of the $GPHOME/etc/hawq-site.xml file. Copy the master data directory to a backup location:
$ export MDATA_DIR=/value/from/hawqsite
$ cp -r $MDATA_DIR /catalog/backup/location
7. Execute this query to display all available HAWQ filespaces:
8. SELECT fsname, fsedbid, fselocation
   FROM pg_filespace as sp, pg_filespace_entry as entry, pg_filesystem as fs
   WHERE sp.fsfsys = fs.oid and fs.fsysname = 'hdfs' and sp.oid = entry.fsefsoid
   ORDER BY entry.fsedbid;
    fsname    | fsedbid |              fselocation
--------------+---------+------------------------------------------
 cdbfast_fs_a |       0 | hdfs://hdfs-cluster/hawq//cdbfast_fs_a
 dfs_system