From f2f7ff34c5971809e4d0a070ce613d4ad2c41159 Mon Sep 17 00:00:00 2001
From: Alexander Rashed
Date: Thu, 7 Oct 2021 08:46:47 +0200
Subject: [PATCH] refactor code fences to use proper language or command shortcode

---
 .../en/docs/Integrations/architect/index.md   |  11 +-
 content/en/docs/Integrations/pulumi/index.md  |   6 +-
 .../spring-cloud-function/index.md            |   2 +-
 .../en/docs/Integrations/terraform/index.md   |  12 +-
 .../docs/Local AWS Services/cognito/index.md  |  54 +++++----
 .../Local AWS Services/elasticsearch/index.md |   1 -
 .../en/docs/Local AWS Services/glue/index.md  |   2 +-
 .../Lambda Tools/debugging.md.bak             | 114 ------------------
 .../Lambda Tools/debugging/index.md           |  18 +--
 .../patched-sdks.md                           |   4 +-
 .../Understanding LocalStack/limitations.md   |   2 +-
 11 files changed, 57 insertions(+), 169 deletions(-)
 delete mode 100644 content/en/docs/LocalStack Tools/Lambda Tools/debugging.md.bak

diff --git a/content/en/docs/Integrations/architect/index.md b/content/en/docs/Integrations/architect/index.md
index 829ed55286..072fd513cc 100644
--- a/content/en/docs/Integrations/architect/index.md
+++ b/content/en/docs/Integrations/architect/index.md
@@ -17,11 +17,12 @@ If you are adapting an existing configuration, you might be able to skip certain
 ## Example
 ### Setup
-To use Architect in conjunction with Localstack, simply install the ```arclocal``` command (sources can be found [here](https://github.com/localstack/architect-local)).
-```
-npm install -g architect-local @architect/architect aws-sdk
-```
-The ``` arclocal``` command has the same usage as the ```arc``` command, so you can start right away.
+To use Architect in conjunction with LocalStack, simply install the `arclocal` command (sources can be found [here](https://github.com/localstack/architect-local)).
+{{< command >}}
+$ npm install -g architect-local @architect/architect aws-sdk
+{{< /command >}}
+
+The `arclocal` command has the same usage as the `arc` command, so you can start right away.
 
 Create a test directory
 
diff --git a/content/en/docs/Integrations/pulumi/index.md b/content/en/docs/Integrations/pulumi/index.md
index a00cf151aa..ca97cefe0f 100644
--- a/content/en/docs/Integrations/pulumi/index.md
+++ b/content/en/docs/Integrations/pulumi/index.md
@@ -52,8 +52,8 @@ Installing dependencies...
 
 This will create the following directory structure.
 
-```language
- % tree -L 1
+{{< command >}}
+$ tree -L 1
 .
 ├── index.ts
 ├── node_modules
 ├── package.json
 ├── Pulumi.dev.yaml
 ├── Pulumi.yaml
 └── tsconfig.json
-```
+{{< / command >}}
 
 Now edit your stack configuration `Pulumi.dev.yaml` as follows:
 
diff --git a/content/en/docs/Integrations/spring-cloud-function/index.md b/content/en/docs/Integrations/spring-cloud-function/index.md
index ed72825de8..7aaee5ffe3 100644
--- a/content/en/docs/Integrations/spring-cloud-function/index.md
+++ b/content/en/docs/Integrations/spring-cloud-function/index.md
@@ -245,7 +245,7 @@ Let's configure it to lookup our function Beans by HTTP method and path, create
 new `application.properties` file under `src/main/resources/application.properties`
 with the following content:
 
-```properties
+```env
 spring.main.banner-mode=off
 spring.cloud.function.definition=functionRouter
 spring.cloud.function.routing-expression=headers['httpMethod'].concat(' ').concat(headers['path'])
diff --git a/content/en/docs/Integrations/terraform/index.md b/content/en/docs/Integrations/terraform/index.md
index e67fbf7d3d..64d91759cd 100644
--- a/content/en/docs/Integrations/terraform/index.md
+++ b/content/en/docs/Integrations/terraform/index.md
@@ -34,7 +34,7 @@ The following changes go into this file.
 
 First, we have to specify mock credentials for the AWS provider:
 
-```
+```hcl
 provider "aws" {
 
   access_key = "test"
@@ -48,7 +48,7 @@ provider "aws" {
 Second, we need to avoid issues with routing and authentication (as we do not need it).
 Therefore we need to supply some general parameters:
 
-```
+```hcl
 provider "aws" {
 
   access_key = "test"
@@ -66,7 +66,7 @@ provider "aws" {
 Additionally, we have to point the individual services to LocalStack.
 In case of S3, this looks like the following snippet
 
-```
+```hcl
   endpoints {
     s3 = "http://localhost:4566"
   }
@@ -79,7 +79,7 @@ In case of S3, this looks like the following snippet
 ### S3 Bucket
 Now we are adding a minimal s3 bucket outside the provider
 
-```
+```hcl
 resource "aws_s3_bucket" "test-bucket" {
   bucket = "my-bucket"
 }
@@ -89,7 +89,7 @@ resource "aws_s3_bucket" "test-bucket" {
 ### Final Configuration
 The final (minimal) configuration to deploy an s3 bucket thus looks like this
 
-```
+```hcl
 provider "aws" {
 
   access_key = "mock_access_key"
@@ -128,7 +128,7 @@ $ terraform deploy
 
 Here is a configuration example with additional endpoints:
 
-```
+```hcl
 provider "aws" {
   access_key = "test"
   secret_key = "test"
diff --git a/content/en/docs/Local AWS Services/cognito/index.md b/content/en/docs/Local AWS Services/cognito/index.md
index ed9f69ec06..c3b3d2ca08 100644
--- a/content/en/docs/Local AWS Services/cognito/index.md
+++ b/content/en/docs/Local AWS Services/cognito/index.md
@@ -17,7 +17,7 @@ LocalStack Pro contains basic support for authentication via Cognito. You can cr
 {{< /alert >}}
 
 First, start up LocalStack. In addition to the normal setup, we need to pass several SMTP settings as environment variables.
-```
+```env
 SMTP_HOST=
 SMTP_USER=
 SMTP_PASS=
 Don't forget to pass Cognito as a service as well.
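For readers following along, a complete startup invocation for this section might look like the sketch below. It is illustrative only: the SMTP values are placeholders for your own mail server, and the `SERVICES` list (`cognito-idp,cognito-identity`) is an assumption about how the Cognito APIs are enabled rather than something this patch prescribes.

```bash
# Sketch only - replace the SMTP placeholders with your mail server's details.
# SERVICES is assumed to be how the Cognito APIs are enabled in this setup.
SMTP_HOST=localhost:1025 \
SMTP_USER=no-reply@example.com \
SMTP_PASS=example-password \
SERVICES=cognito-idp,cognito-identity \
localstack start
```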
 ## Creating a User Pool
 Just as with aws, you can create a User Pool in LocalStack via
-```
-awslocal cognito-idp create-user-pool --pool-name test
-```
+{{< command >}}
+$ awslocal cognito-idp create-user-pool --pool-name test
+{{< /command >}}
 The response should look similar to this
-```
+```json
 "UserPool": {
     "Id": "us-east-1_fd924693e9b04f549f989283123a29c2",
     "Name": "test",
@@ -60,28 +60,31 @@ The response should look similar to this
         "AllowAdminCreateUserOnly": false
     },
     "Arn": "arn:aws:cognito-idp:us-east-1:000000000000:userpool/us-east-1_fd924693e9b04f549f989283123a29c2"
+}
 ```
-We will need the pool-id for further operations, so save it in a ```pool_id``` variable.
+We will need the pool-id for further operations, so save it in a `pool_id` variable.
 Alternatively, you can also use a JSON processor like [jq](https://stedolan.github.io/jq/) to directly extract the necessary information when creating a pool.
-```
-pool_id=$(awslocal cognito-idp create-user-pool --pool-name test | jq -rc ".UserPool.Id")
-```
+
+{{< command >}}
+$ pool_id=$(awslocal cognito-idp create-user-pool --pool-name test | jq -rc ".UserPool.Id")
+{{< /command >}}
+
 ## Adding a Client
 Now we add a client to our newly created pool.
 We will also need the ID of the created client for the next step.
 The complete command for client creation with subsequent ID extraction is therefore
-```
-client_id=$(awslocal cognito-idp create-user-pool-client --user-pool-id $pool_id --client-name test-client | jq -rc ".UserPoolClient.ClientId")
-```
+{{< command >}}
+$ client_id=$(awslocal cognito-idp create-user-pool-client --user-pool-id $pool_id --client-name test-client | jq -rc ".UserPoolClient.ClientId")
+{{< /command >}}
 
 ## Signing up and confirming a user
 With these steps already taken, we can now sign up a user.
-```
-awslocal cognito-idp sign-up --client-id $client_id --username example_user --password 12345678 --user-attributes Name=email,Value=
-```
+{{< command >}}
+$ awslocal cognito-idp sign-up --client-id $client_id --username example_user --password 12345678 --user-attributes Name=email,Value=
+{{< /command >}}
 The response should look similar to this
-```
+```json
 {
     "UserConfirmed": false,
     "UserSub": "5fdbe1d5-7901-4fee-9d1d-518103789c94"
@@ -91,17 +94,17 @@ and you should have received a new e-mail!
 
 As you can see, our user is still unconfirmed.
 We can change this with the following instruction.
-```
-awslocal cognito-idp confirm-sign-up --client-id $client_id --username example_user --confirmation-code
-```
+{{< command >}}
+$ awslocal cognito-idp confirm-sign-up --client-id $client_id --username example_user --confirmation-code
+{{< /command >}}
 The verification code for the user is in the e-mail you received.
 Additionally, LocalStack prints out the verification code in the console.
 
 The above command doesn't return an answer, you need to check the pool to see that it was successful
-```
-awslocal cognito-idp list-users --user-pool-id $pool_id
-```
+{{< command >}}
+$ awslocal cognito-idp list-users --user-pool-id $pool_id
+{{< /command >}}
 which should return something similar to this
-
+```json {hl_lines=[20]}
 {
     "Users": [
         {
@@ -121,12 +124,11 @@ which should return something similar to this
                 }
             ],
             "Enabled": true,
-            "UserStatus": "CONFIRMED"
+            "UserStatus": "CONFIRMED"
         }
     ]
 }
-
-
+```
 
 ## OAuth Flows via Cognito Login Form
 
diff --git a/content/en/docs/Local AWS Services/elasticsearch/index.md b/content/en/docs/Local AWS Services/elasticsearch/index.md
index 92467165b5..a4d4b8c840 100644
--- a/content/en/docs/Local AWS Services/elasticsearch/index.md
+++ b/content/en/docs/Local AWS Services/elasticsearch/index.md
@@ -72,7 +72,6 @@ In the LocalStack log you will see something like
 2021-10-01T21:14:27:INFO:localstack.services.install: Installing Elasticsearch plugin analysis-stempel
 2021-10-01T21:14:45:INFO:localstack.services.install: Installing Elasticsearch plugin analysis-ukrainian
 2021-10-01T21:15:01:INFO:localstack.services.es.cluster: starting elasticsearch: /opt/code/localstack/localstack/infra/elasticsearch/bin/elasticsearch -E http.port=59237 -E http.publish_port=59237 -E transport.port=0 -E network.host=127.0.0.1 -E http.compression=false -E path.data="/opt/code/localstack/localstack/infra/elasticsearch/data" -E path.repo="/tmp/localstack/es_backup" -E xpack.ml.enabled=false with env {'ES_JAVA_OPTS': '-Xms200m -Xmx600m', 'ES_TMPDIR': '/opt/code/localstack/localstack/infra/elasticsearch/tmp'}
-
 ```
 
 and after some time, you should see that the `Created` state of the domain is set to `true`:
diff --git a/content/en/docs/Local AWS Services/glue/index.md b/content/en/docs/Local AWS Services/glue/index.md
index 751f28f3a5..457e043cb9 100644
--- a/content/en/docs/Local AWS Services/glue/index.md
+++ b/content/en/docs/Local AWS Services/glue/index.md
@@ -68,7 +68,7 @@ For a more detailed example illustrating how to run a local Glue PySpark job, pl
 The Glue data catalog is integrated with Athena, and the database/table definitions can be imported via the `import-catalog-to-glue` API.
 Assume you are running the following Athena queries to create databases and table definitions:
 
-```
+```sql
 CREATE DATABASE db2
 CREATE EXTERNAL TABLE db2.table1 (a1 Date, a2 STRING, a3 INT) LOCATION 's3://test/table1'
 CREATE EXTERNAL TABLE db2.table2 (a1 Date, a2 STRING, a3 INT) LOCATION 's3://test/table2'
diff --git a/content/en/docs/LocalStack Tools/Lambda Tools/debugging.md.bak b/content/en/docs/LocalStack Tools/Lambda Tools/debugging.md.bak
deleted file mode 100644
index f52d1dd8ff..0000000000
--- a/content/en/docs/LocalStack Tools/Lambda Tools/debugging.md.bak
+++ /dev/null
@@ -1,114 +0,0 @@
----
-title: "Remote debugging"
-date: 2021-09-27
-weight: 5
-description: >
-  Attach a debugger to your lambdas from your IDE.
----
-
-| Complexity | ★☆☆☆☆ |
-|--------------|-------------------|
-| Time to read | 5 minutes |
-| Edition | community/pro |
-| Platform | any |
-
-## Covered Topics
-
-* [Debugging JVM lambdas](#debugging-jvm-lambdas)
-* Debugging Node lambdas (under development)
-* Debugging Python lambdas (under development)
-
-## Debugging JVM lambdas
-
-### Configuring LocalStack service
-
-1. Set `LAMBDA_JAVA_OPTS` with `jdwp` settings and expose the debug port
-(you can use any other port of your choice):
-
-```yaml
-#docker-compose.yml
-
-services:
-  localstack:
-    ...
-    environment:
-      ...
-      - LAMBDA_JAVA_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:5050
-      - LAMBDA_DOCKER_FLAGS=-p 127.0.0.1:5050:5050
-```
-
-### Configuring IntelliJ IDEA
-
-Open the `Run/Debug Configurations` window and create a new `Shell Script` with
-the following content:
-
-{{< command >}}
-$ while [[ -z $(docker ps | grep :5050) ]]; do sleep 1; done
-{{< / command >}}
-
-![Run/Debug Configurations](../img-inteliji-debugger-1.png)
-
-This shell script should simplify the process a bit since the debugger server is not
-immediately available (only once lambda container is up).
-
-Then create a new `Remote JVM Debug` configuration and use the script from
-above as a `Before launch` target:
-
-![Run/Debug Configurations](../img-inteliji-debugger-2.png)
-
-Now to debug your lambda function, simply click on the `Debug` icon with
-`Remote JVM on LS Debug` configuration selected, and then invoke your
-lambda function.
-
-### Configuring Visual Studio Code
-
-Make sure you installed the following extensions:
-* [Language Support for Java(TM) by Red Hat](https://marketplace.visualstudio.com/items?itemName=redhat.java)
-* [Debugger for Java](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-debug)
-
-Add a new task by creating/modifying the `.vscode/tasks.json` file:
-
-```json
-{
-    "version": "2.0.0",
-    "tasks": [
-        {
-            "label": "Wait Remote Debugger Server",
-            "type": "shell",
-            "command": "while [[ -z $(docker ps | grep :5050) ]]; do sleep 1; done; sleep 1;"
-        }
-    ]
-}
-```
-
-Create a new `launch.json` file or edit an existing one from the `Run and Debug` tab,
-then add the following configuration:
-
-```json
-{
-    "version": "0.2.0",
-    "configurations": [
-        {
-            "type": "java",
-            "name": "Remote JVM on LS Debug",
-            "projectRoot": "${workspaceFolder}",
-            "request": "attach",
-            "hostName": "localhost",
-            "preLaunchTask": "Wait Remote Debugger Server",
-            "port": 5050
-        }
-    ]
-}
-```
-
-Now to debug your lambda function, click on the `Debug` icon with
-`Remote JVM on LS Debug` configuration selected, and then invoke your
-lambda function.
-
-## Debugging Node lambdas
-
-> The documentation is under development
-
-## Debugging Python lambdas
-
-> The documentation is under development
diff --git a/content/en/docs/LocalStack Tools/Lambda Tools/debugging/index.md b/content/en/docs/LocalStack Tools/Lambda Tools/debugging/index.md
index a17ec02296..35e8396d2e 100644
--- a/content/en/docs/LocalStack Tools/Lambda Tools/debugging/index.md
+++ b/content/en/docs/LocalStack Tools/Lambda Tools/debugging/index.md
@@ -38,11 +38,11 @@ There, the necessary code fragments for enabling debugging are already present.
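For orientation, such a fragment usually boils down to a few lines of `debugpy` setup at the top of the handler module. The sketch below is illustrative rather than a copy of the sample repository's code; it assumes the `debugpy` package is available inside the Lambda container and reuses the port published via `LAMBDA_DOCKER_FLAGS` in the next step (19891).

```python
# Illustrative sketch - not the sample repository's exact code.
import debugpy

# Listen on the port published by LAMBDA_DOCKER_FLAGS (19891 in this guide)
# and block until the IDE attaches, so early breakpoints are not skipped.
debugpy.listen(("0.0.0.0", 19891))
debugpy.wait_for_client()

def handler(event, context):
    # Matches the --handler handler.handler used when creating the function below.
    return {"message": event.get("message", "no message provided")}
```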
 ### Configure LocalStack for remote Python debugging
 First, make sure that LocalStack is started with the following configuration (see the [Configuration docs]({{< ref "configuration#lambda" >}}) for more information):
-```sh
-LAMBDA_REMOTE_DOCKER=0 \
+{{< command >}}
+$ LAMBDA_REMOTE_DOCKER=0 \
 LAMBDA_DOCKER_FLAGS='-p 19891:19891' \
 DEBUG=1 localstack start
-```
+{{< /command >}}
 
 ### Preparing your code
@@ -86,19 +86,19 @@ To create the Lambda function, you just need to take care of two things:
 
 So, in our [example](https://github.com/localstack/localstack-pro-samples/tree/master/lambda-mounting-and-debugging), this would be:
 
-```sh
-awslocal lambda create-function --function-name my-cool-local-function \
+{{< command >}}
+$ awslocal lambda create-function --function-name my-cool-local-function \
 --code S3Bucket="__local__",S3Key="$(pwd)/" \
 --handler handler.handler \
 --runtime python3.8 \
 --role cool-stacklifter
-```
+{{< /command >}}
 
 We can quickly verify that it works by invoking it with a simple payload:
 
-```sh
-awslocal lambda invoke --function-name my-cool-local-function --payload '{"message": "Hello from LocalStack!"}' output.txt
-```
+{{< command >}}
+$ awslocal lambda invoke --function-name my-cool-local-function --payload '{"message": "Hello from LocalStack!"}' output.txt
+{{< /command >}}
 
 ### Configuring Visual Studio Code for remote Python debugging
 
diff --git a/content/en/docs/LocalStack Tools/transparent-execution-mode/patched-sdks.md b/content/en/docs/LocalStack Tools/transparent-execution-mode/patched-sdks.md
index 7344b70798..cd67a8b8a9 100644
--- a/content/en/docs/LocalStack Tools/transparent-execution-mode/patched-sdks.md
+++ b/content/en/docs/LocalStack Tools/transparent-execution-mode/patched-sdks.md
@@ -37,9 +37,9 @@ The main advantage of this mode is, that no DNS magic is involved, and SSL certi
 
 ## Configuration
 
-If you want to disable this behavior, and use the DNS server to resolve the endpoints for AWS, you can disable this behavior using:
+If you want to disable this behavior, and instead use the DNS server to resolve the endpoints for AWS, you can do so by setting:
 
-```
+```bash
 TRANSPARENT_LOCAL_ENDPOINTS=0
 ```
 
diff --git a/content/en/docs/Understanding LocalStack/limitations.md b/content/en/docs/Understanding LocalStack/limitations.md
index b2b42f766e..64ece199e8 100644
--- a/content/en/docs/Understanding LocalStack/limitations.md
+++ b/content/en/docs/Understanding LocalStack/limitations.md
@@ -46,7 +46,7 @@ way you'll be installing packages for `x86_64` platform.
 What we will be doing now is installing Java and Python executables using Homebrew,
 it should automatically resolve packages to proper architecture versions.
 
-```shell
+```bash
 # Install Homebrew
 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"