2023-03-08 11:48:20,944 - The 'hadoop-hdfs-datanode' component did not advertise a version. This may indicate a problem with the component packaging.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/datanode.py", line 126, in <module>
    DataNode().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/datanode.py", line 45, in install
    self.install_packages(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 839, in install_packages
    name = self.format_package_name(package['name'])
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 562, in format_package_name
    return self.get_package_from_available(name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 529, in get_package_from_available
    raise Fail("No package found for {0}(expected name: {1})".format(name, name_with_version))
resource_management.core.exceptions.Fail: No package found for hadoop_${stack_version}(expected name: hadoop_3_1)

stdout: /var/lib/ambari-agent/data/output-247.txt

2023-03-08 11:48:14,746 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=None -> 3.1
2023-03-08 11:48:14,754 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2023-03-08 11:48:14,756 - Group['hdfs'] {}
2023-03-08 11:48:14,757 - Group['hadoop'] {}
2023-03-08 11:48:14,758 - Group['users'] {}
2023-03-08 11:48:14,758 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2023-03-08 11:48:14,759 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2023-03-08 11:48:14,760 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2023-03-08 11:48:14,761 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2023-03-08 11:48:14,762 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2023-03-08 11:48:14,763 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2023-03-08 11:48:14,773 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2023-03-08 11:48:14,774 - Group['hdfs'] {}
2023-03-08 11:48:14,775 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2023-03-08 11:48:14,775 - FS Type: HDFS
2023-03-08 11:48:14,776 - Directory['/etc/hadoop'] {'mode': 0755}
2023-03-08 11:48:14,776 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2023-03-08 11:48:14,797 - Repository['HDP-3.1-repo-2'] {'base_url': '', 'action': ['prepare'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None}
2023-03-08 11:48:14,810 - Repository['HDP-UTILS-1.1.0.22-repo-2'] {'base_url': '', 'action': ['prepare'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None}
2023-03-08 11:48:14,815 - Repository['HDP-3.1-GPL-repo-2'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.1.0.0', 'action': ['prepare'], 'components': [u'HDP-GPL', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None}
2023-03-08 11:48:14,819 - Repository[None] {'action': ['create']}
2023-03-08 11:48:14,820 - File['/tmp/tmp1hcs68'] {'content': '[HDP-3.1-repo-2]\nname=HDP-3.1-repo-2\nbaseurl=\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-2]\nname=HDP-UTILS-1.1.0.22-repo-2\nbaseurl=\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.1-GPL-repo-2]\nname=HDP-3.1-GPL-repo-2\nbaseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.1.0.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2023-03-08 11:48:14,821 - Writing File['/tmp/tmp1hcs68'] because contents don't match
2023-03-08 11:48:14,822 - File['/tmp/tmpEskDfZ'] {'content': StaticFile('/etc/yum.repos.d/ambari-hdp-2.repo')}
2023-03-08 11:48:14,823 - Writing File['/tmp/tmpEskDfZ'] because contents don't match
2023-03-08 11:48:14,823 - Rewriting /etc/yum.repos.d/ambari-hdp-2.repo since it has changed.
2023-03-08 11:48:14,824 - File['/etc/yum.repos.d/ambari-hdp-2.repo'] {'content': StaticFile('/tmp/tmp1hcs68')}
2023-03-08 11:48:14,825 - Writing File['/etc/yum.repos.d/ambari-hdp-2.repo'] because contents don't match
2023-03-08 11:48:14,826 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2023-03-08 11:48:14,963 - Skipping installation of existing package unzip
2023-03-08 11:48:14,964 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2023-03-08 11:48:14,974 - Skipping installation of existing package curl
2023-03-08 11:48:14,974 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2023-03-08 11:48:14,983 - Skipping installation of existing package hdp-select
2023-03-08 11:48:15,086 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2023-03-08 11:48:15,126 - call returned (0, '')
2023-03-08 11:48:15,380 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2023-03-08 11:48:15,382 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=None -> 3.1
2023-03-08 11:48:15,410 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2023-03-08 11:48:15,434 - Command repositories: HDP-3.1-repo-2, HDP-UTILS-1.1.0.22-repo-2, HDP-3.1-GPL-repo-2
2023-03-08 11:48:15,434 - Applicable repositories: HDP-3.1-repo-2, HDP-UTILS-1.1.0.22-repo-2, HDP-3.1-GPL-repo-2
2023-03-08 11:48:15,435 - Looking for matching packages in the following repositories: HDP-3.1-repo-2, HDP-UTILS-1.1.0.22-repo-2, HDP-3.1-GPL-repo-2
2023-03-08 11:48:15,671 - Command execution error: command = "/usr/bin/yum list available --showduplicates --disablerepo=* --enablerepo=HDP-3.1-repo-2", exit code = 1, stderr =
  One of the configured repositories failed (Unknown), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this:
    1. Contact the upstream for the repository and get them to fix the problem.
    2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work).
    3. Run the command with the repository temporarily disabled
         yum --disablerepo=<repoid> ...
    4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage:
         yum-config-manager --disable <repoid>
       or
         subscription-manager repos --disable=<repoid>
    5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise:
         yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
  Cannot find a valid baseurl for repo: HDP-3.1-repo-2
2023-03-08 11:48:17,303 - Command execution error: command = "/usr/bin/yum list available --showduplicates --disablerepo=* --enablerepo=HDP-UTILS-1.1.0.22-repo-2", exit code = 1, stderr =
  One of the configured repositories failed (Unknown), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this:
    1. Contact the upstream for the repository and get them to fix the problem.
    2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work).
    3. Run the command with the repository temporarily disabled
         yum --disablerepo=<repoid> ...
    4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage:
         yum-config-manager --disable <repoid>
       or
         subscription-manager repos --disable=<repoid>
    5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise:
         yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
  Cannot find a valid baseurl for repo: HDP-UTILS-1.1.0.22-repo-2
2023-03-08 11:48:20,895 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2023-03-08 11:48:20,943 - call returned (0, '')
2023-03-08 11:48:20,944 - The 'hadoop-hdfs-datanode' component did not advertise a version. This may indicate a problem with the component packaging.

Command failed after 1 tries
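The failure chain is visible in the log itself: Repository['HDP-3.1-repo-2'] and Repository['HDP-UTILS-1.1.0.22-repo-2'] are registered with 'base_url': '', so the generated /etc/yum.repos.d/ambari-hdp-2.repo ends up with empty baseurl= lines, yum then rejects both repos ("Cannot find a valid baseurl"), and the lookup for the hadoop_3_1 package fails. Below is a minimal sketch of what that repo file would need to contain before the DataNode install can succeed. The host and paths are placeholders (not the real URLs for this cluster), and since Ambari rewrites this file on every run, the Base URLs should normally be corrected in Ambari's repository settings for the HDP-3.1 version rather than by hand-editing the file:

# /etc/yum.repos.d/ambari-hdp-2.repo (sketch; <repo-host> and paths are placeholders)
[HDP-3.1-repo-2]
name=HDP-3.1-repo-2
# must point at a reachable HDP 3.1 repository instead of being empty
baseurl=http://<repo-host>/HDP/centos7/3.x/updates/3.1.0.0
path=/
enabled=1
gpgcheck=0

[HDP-UTILS-1.1.0.22-repo-2]
name=HDP-UTILS-1.1.0.22-repo-2
# must point at a reachable HDP-UTILS repository instead of being empty
baseurl=http://<repo-host>/HDP-UTILS-1.1.0.22/repos/centos7
path=/
enabled=1
gpgcheck=0

Once the baseurl values resolve, the same probe the agent runs ("/usr/bin/yum list available --showduplicates --disablerepo=* --enablerepo=HDP-3.1-repo-2") should list the hadoop_3_1* packages instead of exiting with code 1, and the install should get past the "No package found for hadoop_${stack_version}" failure.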