
SAS token generated by a C# client for an Azure container doesn't work

  • mifan · Tech Community · 2 years ago

    I have a Spark application that writes its event log to an Azure Blob container.

    I want to authenticate with a SAS token. A SAS token generated in the Azure portal works fine, but one generated by my C# client does not, and I can't tell what the difference between the two tokens is.

    This is how I generate the SAS token in the Azure portal:

    (screenshot: generating the SAS token in the Azure portal)

    This is my Spark configuration:

        spark.eventLog.dir: "abfss://sparkevent@lydevstorage0.dfs.core.windows.net/log"
        spark.hadoop.fs.azure.account.auth.type.lydevstorage0.dfs.core.windows.net: "SAS"
        spark.hadoop.fs.azure.sas.fixed.token.lydevstorage0.dfs.core.windows.net: ""
        spark.hadoop.fs.azure.sas.token.provider.type.lydevstorage0.dfs.core.windows.net: "org.apache.hadoop.fs.azurebfs.sas.FixedSASTokenProvider"
    

    This is the C# code:

        using System;
        using Azure.Storage;
        using Azure.Storage.Sas;

        // Build a SAS for the "sparkevent" container, signed with the account key.
        BlobSasBuilder blobSasBuilder = new BlobSasBuilder()
        {
            StartsOn = DateTimeOffset.UtcNow.AddDays(-1),
            ExpiresOn = DateTimeOffset.UtcNow.AddDays(1),
            Protocol = SasProtocol.HttpsAndHttp,
            BlobContainerName = "sparkevent",
            Resource = "b" // I also tried "c"
        };
        blobSasBuilder.SetPermissions(BlobContainerSasPermissions.All);

        string sasToken2 = blobSasBuilder.ToSasQueryParameters(
            new StorageSharedKeyCredential("lydevstorage0", <access key>)).ToString();

    The error is:

    Exception in thread "main" java.nio.file.AccessDeniedException: Operation failed: "Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.", 403, HEAD, https://lydevstorage0.dfs.core.windows.net/sparkevent/?upn=false&action=getAccessControl&timeout=90&sv=2021-02-12&spr=https,http&st=2023-06-26T03:33:27Z&se=2023-06-28T03:33:27Z&sr=c&sp=racwdxlti&sig=XXXXX
            at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.checkException(AzureBlobFileSystem.java:1384)
            at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:611)
            at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:599)
            at org.apache.spark.deploy.history.EventLogFileWriter.requireLogBaseDirAsDirectory(EventLogFileWriters.scala:77)
            at org.apache.spark.deploy.history.SingleEventLogFileWriter.start(EventLogFileWriters.scala:221)
            at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:83)
            at org.apache.spark.SparkContext.<init>(SparkContext.scala:612)
            at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2704)
            at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)
            at scala.Option.getOrElse(Option.scala:189)
            at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)
            at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
            at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
            at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.base/java.lang.reflect.Method.invoke(Method.java:566)
            at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
            at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
            at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
            at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
            at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
            at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
            at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
            at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    Caused by: Operation failed: "Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.", 403, HEAD, https://lydevstorage0.dfs.core.windows.net/sparkevent/?upn=false&action=getAccessControl&timeout=90&sv=2021-02-12&spr=https,http&st=2023-06-26T03:33:27Z&se=2023-06-28T03:33:27Z&sr=c&sp=racwdxlti&sig=XXXXX
            at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.completeExecute(AbfsRestOperation.java:231)
            at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.lambda$execute$0(AbfsRestOperation.java:191)
            at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:464)
            at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:189)
            at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:911)
            at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:892)
            at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getIsNamespaceEnabled(AzureBlobFileSystemStore.java:358)
            at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:932)
            at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:609)
            ... 23 more
    

    I tried the SAS token generated in the Azure portal, and it worked fine.
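
    One way to pin down the difference is to print the query parameters of the two tokens side by side and compare them field by field (sp, sr, sv, ...). A minimal sketch; the token strings are placeholders to paste the real values into:

        using System;

        class CompareSasTokens
        {
            static void Main()
            {
                // Placeholders: paste the portal-generated and the C#-generated token here.
                string portalToken = "sv=...&sp=...&sr=...&sig=...";
                string clientToken = "sv=...&sp=...&sr=...&sig=...";

                foreach (var (label, token) in new[] { ("portal", portalToken), ("client", clientToken) })
                {
                    Console.WriteLine(label + ":");
                    foreach (var pair in token.TrimStart('?').Split('&'))
                        Console.WriteLine("  " + pair); // one line per field, e.g. sp=racwdxlti
                }
            }
        }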

    0 comments · 2 years ago

    Answer 1 (score 1) · Venkatesan · 2 years ago

    If you are using a Data Lake Gen2 account with a hierarchical namespace, you can use the DataLake package to create the SAS token in C#, as in the following code.

    Code:

        using System;
        using Azure.Storage;
        using Azure.Storage.Files.DataLake;
        using Azure.Storage.Sas;

        namespace SAStoken
        {
            class Program
            {
                private static void Main()
                {
                    var AccountName = "venkat098";
                    var AccountKey = "";
                    var FileSystemName = "filesystem1";
                    StorageSharedKeyCredential key = new StorageSharedKeyCredential(AccountName, AccountKey);
                    string dfsUri = "https://" + AccountName + ".dfs.core.windows.net";
                    var dataLakeServiceClient = new DataLakeServiceClient(new Uri(dfsUri), key);
                    // GetFileSystemClient returns a client for the container (file system).
                    var fileSystemClient = dataLakeServiceClient.GetFileSystemClient(FileSystemName);
                    DataLakeSasBuilder sas = new DataLakeSasBuilder()
                    {
                        FileSystemName = FileSystemName, // container name
                        Resource = "d",                  // directory resource
                        IsDirectory = true,
                        ExpiresOn = DateTimeOffset.UtcNow.AddDays(7),
                        Protocol = SasProtocol.HttpsAndHttp,
                    };
                    sas.SetPermissions(DataLakeAccountSasPermissions.All);
                    Uri sasUri = fileSystemClient.GenerateSasUri(sas);
                    Console.WriteLine(sasUri);
                }
            }
        }

    Output:

    https://venkat098.dfs.core.windows.net/filesystem1?sv=2022-11-02&spr=https,http&se=2023-07-04T05%3A53%3A39Z&sr=c&sp=racwdl&sig=xxxxxx
    


    I checked the URL against an image file in the container, and it worked successfully.

    https://venkat098.dfs.core.windows.net/filesystem1/cell_division.jpeg?sv=2022-11-02&spr=https,http&se=2023-07-04T05%3A53%3A39Z&sr=c&sp=racwdl&sig=xxxxx
    

    Browser: (screenshot of the image file loading via the SAS URL)

    Reference:

    Use .NET to manage data in Azure Data Lake Storage Gen2 - Azure Storage | Microsoft Learn

    Answer 2 (score 0) · mifan · 2 years ago

    The root cause is that my Spark program could not get or set access control lists (ACLs). I shouldn't have used BlobSasBuilder or AccountSasBuilder, because a blob container by itself has no notion of ACLs, so the SAS tokens they generate naturally carry no ACL operation permissions.
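
    One way to see this is to list the members of the permission enums that BlobSasBuilder and AccountSasBuilder accept; a minimal sketch (assumes the Azure.Storage.Blobs package; neither enum contains any ACL-related flag):

        using System;
        using Azure.Storage.Sas;

        class PermissionBits
        {
            static void Main()
            {
                // Prints e.g. Read, Add, Create, Write, Delete, ... -- no ACL flag appears.
                Console.WriteLine(string.Join(", ", Enum.GetNames(typeof(BlobContainerSasPermissions))));
                Console.WriteLine(string.Join(", ", Enum.GetNames(typeof(AccountSasPermissions))));
            }
        }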

    With @Venkatesan's help, I learned that I can use DataLakeSasBuilder instead: Data Lake follows the HDFS model, so it understands ACLs. However, the permission set @Venkatesan used is DataLakeAccountSasPermissions, which does not include the ManageAccessControl permission. The correct permission set is DataLakeFileSystemSasPermissions. After switching to it, my program works.
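
    For completeness, here is a minimal sketch of what the corrected generation looks like; it mirrors @Venkatesan's code with only the permission set swapped, the account and container names are taken from my question, and the access key is a placeholder:

        using System;
        using Azure.Storage;
        using Azure.Storage.Files.DataLake;
        using Azure.Storage.Sas;

        class Program
        {
            private static void Main()
            {
                var accountName = "lydevstorage0";
                var accountKey = "<access key>"; // placeholder
                var fileSystemName = "sparkevent";

                var credential = new StorageSharedKeyCredential(accountName, accountKey);
                var serviceClient = new DataLakeServiceClient(
                    new Uri($"https://{accountName}.dfs.core.windows.net"), credential);
                var fileSystemClient = serviceClient.GetFileSystemClient(fileSystemName);

                var sasBuilder = new DataLakeSasBuilder()
                {
                    FileSystemName = fileSystemName,
                    Resource = "c", // file-system (container) scoped SAS
                    ExpiresOn = DateTimeOffset.UtcNow.AddDays(1),
                    Protocol = SasProtocol.Https,
                };
                // DataLakeFileSystemSasPermissions includes ManageAccessControl,
                // which the ABFS driver's getAccessControl call requires.
                sasBuilder.SetPermissions(DataLakeFileSystemSasPermissions.All);

                Uri sasUri = fileSystemClient.GenerateSasUri(sasBuilder);
                // The query string (without the leading '?') is the token that goes into
                // spark.hadoop.fs.azure.sas.fixed.token.lydevstorage0.dfs.core.windows.net.
                Console.WriteLine(sasUri.Query.TrimStart('?'));
            }
        }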