Feed aggregator

EDITIONS EBR

Tom Kyte - Thu, 2024-09-05 09:26
It took me a while to pick database development over database administration when it comes to editions. My question is simple: if I've decided to create an editioning view over a table with N triggers, what should I do about them? Some people say all triggers must be moved to the editioning view. Others say that triggers which maintain auditing columns (created by, date of creation, updated, etc.) or populate surrogate key columns (before insert) must remain on the table. Kind regards, Mauricio
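
For context, a minimal sketch of the setup being asked about (table, column, and trigger names are examples; editioning views do support ordinary DML triggers, and whether audit/surrogate-key triggers stay on the base table is exactly the open question):

-- Rename the base table and put an editioning view in its place
ALTER TABLE employees RENAME TO employees_t;

CREATE OR REPLACE EDITIONING VIEW employees AS
  SELECT employee_id, ename, created_by, created_date FROM employees_t;

-- A row-level auditing trigger recreated on the editioning view
CREATE OR REPLACE TRIGGER employees_aud_biu
BEFORE INSERT OR UPDATE ON employees
FOR EACH ROW
BEGIN
  :NEW.created_by   := NVL(:NEW.created_by, USER);
  :NEW.created_date := NVL(:NEW.created_date, SYSDATE);
END;
/
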
Categories: DBA Blogs

A problem when ranking regression slopes over a database link

Tom Kyte - Thu, 2024-09-05 09:26
Dear Colleagues, I have come across a problem when ranking regression slopes over a database link. I have a table containing dates and file volumes for downloads from several sources. I want to build a Top N of the sources growing most in volume. For this, I calculate slopes for all the sources using the regr_slope() function and then rank the slopes using row_number() over (order by slope desc nulls last) as rn_top. Then I want to select rows where rn_top <= :n. The results of the query obtained over a database link differ from the results obtained locally: locally obtained results are correct, remote ones demonstrate strange behavior. Regression is correct, ranking is correct; select * from (the ranking query) is NOT correct even without any 'where' condition: the slopes are correct, but their ranking is not. This effect does not take place for constants, nor for max/min/avg functions, only for regr_* functions. The effect is stable for two versions of remote servers, Oracle Database 19c Enterprise Edition and Oracle Database 11g Express Edition, with Oracle Database 19c Enterprise Edition on the side the database link originates from. To reproduce: <code>CREATE TABLE "TEST_003" ("TASK_SOURCE_ID" NUMBER(6,0), "TASK_DATE_DOWNLOADED" DATE, "TASK_FILE_SIZE" VARCHAR2(128)) ; /* historically task_file_size is varchar in original table */ Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('26.08.24','DD.MM.RR'),'8266'); Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114657',to_date('26.08.24','DD.MM.RR'),'6925'); Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114658',to_date('26.08.24','DD.MM.RR'),'8783'); Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114659',to_date('26.08.24','DD.MM.RR'),'6590'); Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114660',to_date('26.08.24','DD.MM.RR'),'7204'); Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('122994',to_date('26.08.24','DD.MM.RR'),'59904'); Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('120116',to_date('27.08.24','DD.MM.RR'),'35125'); Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('27.08.24','DD.MM.RR'),'8226'); Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114658',to_date('27.08.24','DD.MM.RR'),'8784'); Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114659',to_date('27.08.24','DD.MM.RR'),'6591'); Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114660',to_date('27.08.24','DD.MM.RR'),'7206'); Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('28.08.24','DD.MM.RR'),'8230'); Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNL...
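
To make the comparison concrete, here is a sketch of the kind of query described (the database link name remote_db and the date origin are placeholders; the inner view computes the slopes, the outer query ranks them):

SELECT *
  FROM (SELECT s.*,
               row_number() OVER (ORDER BY s.slope DESC NULLS LAST) AS rn_top
          FROM (SELECT task_source_id,
                       regr_slope(to_number(task_file_size),
                                  task_date_downloaded - DATE '2024-01-01') AS slope
                  FROM test_003@remote_db
                 GROUP BY task_source_id) s)
 WHERE rn_top <= :n;
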
Categories: DBA Blogs

Extract a node value from XML stored in a BLOB in an Oracle table with 266 million rows

Tom Kyte - Thu, 2024-09-05 09:26
Hello Tom, I have a table which has around 266,573,283 rows. That table has a BLOB column which stores XML data. I need to parse the XML in that column and get the value of one node. This node may or may not exist in all the rows. I have tried using Python (the cx_Oracle package). The table is partitioned by the date (month and year) on which the rows were stored. I am not able to process the XML. Is there a way or an approach to process the XML data? Note: this table will not grow anymore because the application has been shut down. Row counts by year: 2018: 3,543,136; 2019: 3,369,956; 2020: 7,576,397; 2021: 82,413,536; 2022: 123,216,864; 2023: 46,453,394. Thanks & Regards, Mani R
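
One purely SQL-side sketch (the table and column names here are hypothetical and the XPath expressions are examples): convert each BLOB to XMLTYPE and let XMLTABLE project the optional node, working through one date partition at a time:

SELECT t.id,
       x.node_value
  FROM big_xml_tab t
       CROSS JOIN XMLTABLE(
         '/root'
         PASSING XMLTYPE(t.xml_blob, NLS_CHARSET_ID('AL32UTF8'))
         COLUMNS node_value VARCHAR2(200) PATH 'optionalNode'
       ) x
 WHERE t.load_date >= DATE '2021-01-01'   -- restrict to one partition's date range at a time
   AND t.load_date <  DATE '2021-02-01';
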
Categories: DBA Blogs

Big PL/SQL block passes wrong parameters for function call

Tom Kyte - Thu, 2024-09-05 09:26
When executing a big PL/SQL block, some of the numeric parameters get lost or replaced with an earlier value. There is no error message; the PL/SQL block executes, but some of the function calls get the wrong parameter values. See the LiveSQL link: it defines a function which compares its input parameters and calls it some 11 thousand times, and at the 10,932nd call it gets mismatched values. The problem seemingly occurs above a certain program size, but it apparently doesn't reach the Diana node limit; there is no PLS-00123 (program too large), the program executes, but it stores the wrong data in the database. Is there a size limit I am bumping into, or could it be an Oracle bug (one that reproduces on multiple instances and different Oracle versions)?
Categories: DBA Blogs

How to connect SQL Developer using an Oracle wallet external password store

Tom Kyte - Wed, 2024-09-04 15:06
Hi, how can I connect from SQL Developer using Oracle wallet-based security? From SQL*Plus we can connect using a command like connect /@sid, but how can I connect from SQL Developer (or Toad) using the wallet-based external password store? Thanks in advance.
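
For reference, a hedged sketch of the client-side external password store that /@alias relies on (wallet path, TNS alias, and user are examples); how SQL Developer or Toad picks it up depends on whether the connection goes through the thick (OCI) client that reads this sqlnet.ora or through the thin JDBC driver:

mkstore -wrl /u01/app/oracle/wallet -create
mkstore -wrl /u01/app/oracle/wallet -createCredential MYDB scott

# sqlnet.ora on the client that should use the wallet
WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet)))
SQLNET.WALLET_OVERRIDE = TRUE

# command-line check: sqlplus /@MYDB
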
Categories: DBA Blogs

Dynamic SQL

Tom Kyte - Wed, 2024-09-04 15:06
Hi, thank you for taking my question. I wanted to know the logic behind not being able to reference :new and :old variables in dynamic SQL: execute immediate 'insert into tablea (columna, columnb) values (:old.columna,:new.columnb)'; will fail. I am just trying to understand the logic behind it. Thank you for your response. PS: I'm a constant browser of your site, loving it!
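
For contrast, a minimal sketch of what does work inside a row-level trigger (tablea and the column names are taken from the question): the dynamic statement is parsed separately at run time, where :OLD/:NEW are not defined, so their values have to be supplied as ordinary binds through the USING clause:

EXECUTE IMMEDIATE
  'insert into tablea (columna, columnb) values (:1, :2)'
  USING :OLD.columna, :NEW.columnb;
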
Categories: DBA Blogs

Object Type Variables in Bulk Collect and FORALL

Tom Kyte - Wed, 2024-09-04 15:06
I was reading the following discussion about using object types: https://asktom.oracle.com/ords/f?p=100:11:607033327783906::::: https://docs.oracle.com/cd/B10501_01/appdev.920/a96624/10_objs.htm But I couldn't understand how object types are used in packages (mainly in bulk processing). Is it possible to use objects in bulk processing? If yes, can you please help with an example?
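
A minimal sketch (all names are examples, and it assumes an EMP source table): bulk collect rows straight into SQL object instances, then bind each whole instance in a FORALL insert into an object table:

CREATE TYPE emp_obj AS OBJECT (empno NUMBER, ename VARCHAR2(30));
/
CREATE TYPE emp_obj_tab AS TABLE OF emp_obj;
/
CREATE TABLE emp_archive OF emp_obj;

DECLARE
  l_emps  emp_obj_tab;
BEGIN
  -- Bulk collect rows directly into object instances
  SELECT emp_obj(empno, ename)
    BULK COLLECT INTO l_emps
    FROM emp;

  -- Bind each whole object instance in the bulk insert
  FORALL i IN 1 .. l_emps.COUNT
    INSERT INTO emp_archive VALUES (l_emps(i));
END;
/
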
Categories: DBA Blogs

Insert Into Large Tables Takes Longer Time

Tom Kyte - Wed, 2024-09-04 15:06
Looking for a recommendation related to large table inserts in an Autonomous (ATP) database. We have a sizable history table in our database, roughly 45 GB, which stores all transactions at the end of each day. Recently, we have been experiencing performance challenges when inserting data into this table. We tried two approaches to fix the performance: (1) the /*+ APPEND */ hint in direct SQL, and (2) BULK COLLECT with FORALL. Both approaches work fine for smaller data sets, but when the data is really big, around 2 million records (on certain days of the week), the inserts take more than 5 minutes to run. Since our calling application has a 5-minute limit on SQL executions, our entire package is failing. Can you please tell us if we are missing anything in our approach, or suggest other things to try for better performance? Also, should we consider partitioning the table for faster DML? If partitioning is something we can consider, which field should we use for partitioning (since the data in the table is quite random, we thought of doing a hash partition on line_id)? Could you provide any suggestions? Furthermore, does the choice of field for hash partitioning carry any significance?
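
A hedged sketch of the two ideas mentioned (table names and columns are hypothetical): a hash-partitioned history table on line_id plus a direct-path, parallel insert from a staging table:

CREATE TABLE txn_history (
  line_id   NUMBER,
  txn_date  DATE,
  amount    NUMBER
)
PARTITION BY HASH (line_id) PARTITIONS 16;

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(h, 4) */ INTO txn_history h
SELECT s.line_id, s.txn_date, s.amount
  FROM txn_stage s;

COMMIT;   -- rows loaded via direct path are visible only after the commit
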
Categories: DBA Blogs

Oracle's RDBMS approach to sizing the database server ( versus MongoDB's approach )

Tom Kyte - Wed, 2024-09-04 15:06
Greetings, my question is about Oracle's recommended process for sizing an Oracle database server. When I have participated in sizing an Oracle server, we only estimated the table sizes using the field sizes multiplied by the total number of records. Then we added 15% of the total table sizes for index space. Next, we ran a few simulated runs with test data. The focus was mostly on making sure that the server had enough disk space and that total CPU didn't go over 50% at peak load. That was it. The steps I mentioned were provided by either Oracle Support or our Oracle technical contact. Is there now an official Oracle whitepaper that states how an Oracle server should be sized? I have recently been asked to help size a MongoDB server. In this case, I do have access to a MongoDB server-sizing whitepaper-like document. It recommends focusing on making sure that the entire "working set" fits in memory. The "working set" is the amount of space used by the data and the indexes needed for processing "frequently accessed data". The "frequently accessed data" is the data needed by normal processing. I still don't know how to accurately determine that size, since the document doesn't go into much detail. Does this sound like a good way to size an Oracle database server, too? Thanks, John P.S. Here is Mongo's whitepaper-like approach: https://www.linkedin.com/pulse/mongodb-sizing-guide-sepp-renfer/
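
As a rough worked example of the "row size x row count + 15% for indexes" estimate described above, the optimizer statistics already hold both inputs (this assumes the statistics are reasonably current; APP_OWNER is a placeholder schema):

SELECT ROUND(SUM(num_rows * avg_row_len) / POWER(1024, 3) * 1.15, 1) AS estimated_gb
  FROM dba_tables
 WHERE owner = 'APP_OWNER';
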
Categories: DBA Blogs

Inlining a function

Tom Kyte - Wed, 2024-09-04 15:06
Hello Tom, I am trying to inline a function when using a MERGE statement and I am not sure if it is working, as no error is thrown. Here is my code: <code>CREATE OR REPLACE PACKAGE my_pkg IS FUNCTION my_fnc ( p1 VARCHAR2 , p2 VARCHAR2) RETURN VARCHAR2; END my_pkg; / CREATE OR REPLACE PACKAGE BODY my_pkg IS FUNCTION my_fnc ( p1 VARCHAR2 , p2 VARCHAR2) RETURN VARCHAR2 IS BEGIN RETURN p1 || '-' || p2; END my_fnc; END my_pkg; / CREATE OR REPLACE PROCEDURE my_prc IS TYPE t_type IS TABLE OF my_tab2%ROWTYPE; v_nt t_type; v_colx my_tab2.colx%TYPE; BEGIN -- version 1 PRAGMA INLINE (my_fnc, 'YES'); MERGE INTO my_tab1 a USING (SELECT col1 , col2 , my_pkg.my_fnc(col3, col4) colx , col5 , col6 FROM my_tab2 WHERE 1 = 1) b -- the condition doesn't matter ON (a.col1 = b.col1 AND a.col2 = b.col2 AND a.colx = b.colx) WHEN MATCHED THEN UPDATE SET a.col5 = b.col5 , a.col6 = b.col6 WHEN NOT MATCHED THEN INSERT ( col1 , col2 , colx , col5 , col6); COMMIT; -- -- version 2 SELECT col1 , col2 , my_pkg.my_fnc(col3, col4) colx , col5 , col6 BULK COLLECT INTO v_nt FROM my_tab2; FORALL i IN v_nt.FIRST .. v_nt.LAST PRAGMA INLINE (my_fnc, 'YES'); v_colx := my_pkg.my_fnc(col3, col4); MERGE INTO my_tab1 a USING (SELECT v_nt(i).col1 col1 , v_nt(i).col2 col2 , v_colx , v_nt(i).col5 col5 , v_nt(i).col6 col6 FROM dual) b -- the condition doesn't matter ON (a.col1 = b.col1 AND a.col2 = b.col2 AND a.colx = b.colx) WHEN MATCHED THEN UPDATE SET a.col5 = b.col5 , a.col6 = b.col6 WHEN NOT MATCHED THEN INSERT ( col1 , col2 , colx , col5 , col6); END my_prc;</code> Now, my questions are: can either version be inlined? Could version 1 not be inlined because the pragma does not precede any of the statements mentioned in the Oracle documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/lnpls/plsql-optimization-and-tuning.html#GUID-2E78813E-CF29-409D-9F8B-AA24B294BFA2)? Or could version 1 not be inlined because the invoked and invoking subprograms are not in the same program unit (it is not very clear what "same program unit" means)? Could version 2 be inlined because the pragma precedes an assignment statement? Or could version 2 not be inlined because the invoked and invoking subprograms are not in the same program unit? Thank you, Ionut Preda.
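
For comparison, a minimal sketch of a case where the pragma clearly applies: the invoked function lives in the same program unit and PRAGMA INLINE immediately precedes an assignment statement (all names are examples):

CREATE OR REPLACE PROCEDURE inline_demo IS
  v VARCHAR2(100);

  FUNCTION my_fnc (p1 VARCHAR2, p2 VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN p1 || '-' || p2;
  END my_fnc;
BEGIN
  -- The pragma applies to the call in the statement that immediately follows it
  PRAGMA INLINE (my_fnc, 'YES');
  v := my_fnc('a', 'b');
  DBMS_OUTPUT.PUT_LINE(v);
END inline_demo;
/
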
Categories: DBA Blogs

Oracle EXPDP commands

Learn DB Concepts with me... - Wed, 2024-09-04 14:25

The expdp (Data Pump Export) utility is used to export data and metadata from an Oracle database. It provides several command-line options to customize the export process. Below are some common expdp commands and their explanations:


Basic Syntax:

 


expdp [username/password] DIRECTORY=directory_name DUMPFILE=dumpfile_name LOGFILE=logfile_name [options]


Common Expdp Commands:

    • Export a Full Database:

expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log

    • FULL=Y: Exports the entire database.
    • Export a Specific Schema:

expdp system/password SCHEMAS=schema_name DIRECTORY=dpump_dir DUMPFILE=schema.dmp LOGFILE=schema.log
    • SCHEMAS=schema_name: Exports a specific schema.
    • Export Specific Tables:

expdp system/password TABLES=table1,table2 DIRECTORY=dpump_dir DUMPFILE=tables.dmp LOGFILE=tables.log
    • TABLES=table1,table2: Exports specific tables.
    • Export a Specific Table with Data and Metadata:

expdp system/password TABLES=table_name DIRECTORY=dpump_dir DUMPFILE=table.dmp LOGFILE=table.log
    • TABLES=table_name: Exports a specific table.
    • Export with Compression:

expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log COMPRESSION=ALL
    • COMPRESSION=ALL: Compresses all data during export.
    • Export with Data Filtering (e.g., Export Data from a Specific Date):

expdp system/password TABLES=table_name DIRECTORY=dpump_dir DUMPFILE=table.dmp LOGFILE=table.log QUERY=table_name:"WHERE created_date > TO_DATE('2024-01-01', 'YYYY-MM-DD')"
    • QUERY=table_name:"WHERE condition": Filters rows based on a condition.
    • Export Metadata Only:

expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=metadata.dmp LOGFILE=metadata.log CONTENT=METADATA_ONLY
    • CONTENT=METADATA_ONLY: Exports only metadata (no data).
    • Export Data Only:

expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=data_only.dmp LOGFILE=data_only.log CONTENT=DATA_ONLY
    • CONTENT=DATA_ONLY: Exports only data (no metadata).
    • Export Using Parallel Processing:

expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log PARALLEL=4
    • PARALLEL=4: Uses 4 parallel threads for faster export.
    • Export with a Job Name:

expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log JOB_NAME=export_full_db
    • JOB_NAME=export_full_db: Assigns a name to the export job.


Additional Parameters:

  • CONTENT: Specifies whether to export metadata only (METADATA_ONLY), data only (DATA_ONLY), or both (ALL).
  • EXCLUDE: Excludes specific objects or object types from the export. Example: EXCLUDE=TABLE:"='table_name'".
  • INCLUDE: Includes specific objects or object types in the export. Example: INCLUDE=TABLE:"IN ('table1', 'table2')".
  • REMAP_SCHEMA: Remaps schema names during import with the companion impdp utility. Example: REMAP_SCHEMA=old_schema:new_schema.
  • REMAP_TABLESPACE: Remaps tablespace names during import with the companion impdp utility. Example: REMAP_TABLESPACE=old_tablespace:new_tablespace.
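
Several of these parameters can also be combined in a parameter file instead of being passed on the command line; below is a hedged sketch (the file name, schema, and table names are examples):

# Contents of export.par
DIRECTORY=dpump_dir
DUMPFILE=app_schema.dmp
LOGFILE=app_schema.log
SCHEMAS=app_schema
EXCLUDE=TABLE:"IN ('TMP_LOAD','SCRATCH')"
COMPRESSION=ALL

The export is then invoked with: expdp system/password PARFILE=export.par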


Directory Object:

Before running expdp, ensure that the DIRECTORY object exists in the database and points to a valid filesystem directory where the dump files will be written.

 

 

CREATE OR REPLACE DIRECTORY dpump_dir AS '/path/to/directory';
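
If the export runs as a user other than SYSTEM or another DBA, that user also needs access to the directory object; a minimal sketch (the grantee scott is an example):

GRANT READ, WRITE ON DIRECTORY dpump_dir TO scott;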


Example Execution:

To execute an expdp command, open a command prompt or terminal and run the appropriate expdp command based on your requirements. Ensure you have the necessary privileges and that the Oracle environment variables (ORACLE_HOME and PATH) are set correctly.



Conclusion:

The expdp utility offers powerful options for exporting data and metadata from Oracle databases. By using the appropriate parameters and options, you can tailor the export process to meet specific needs and optimize performance.

Categories: DBA Blogs

Unified audit on EXECUTE on a procedure does not record in the audit trail on errors ORA-06550 / PLS-00306

Tom Kyte - Tue, 2024-09-03 20:46
Hi, I am helping developers track their failed calls to Oracle procedures in a Java app, and I thought unified auditing could do that. So I created a unified audit policy on execute on the procedure; however, the audit only records successful executions. When the call fails with errors ORA-06550 and PLS-00306, it is not recorded in the audit trail. Here is example code to reproduce the problem: <code>CREATE OR REPLACE PROCEDURE output_parameter(p_input VARCHAR2) AS BEGIN DBMS_OUTPUT.PUT_LINE('Input Parameter: ' || p_input); END; / CREATE AUDIT POLICY Jiulu_dropit3 ACTIONS EXECUTE ON jiulusun.output_parameter; AUDIT POLICY Jiulu_dropit3;</code> logout login -- successful execution <code>SQL> exec output_parameter('a'); Input Parameter: a</code> PL/SQL procedure successfully completed. -- failed execution <code>SQL> exec output_parameter('a','b'); BEGIN output_parameter('a','b'); END; * ERROR at line 1: ORA-06550: line 1, column 7: PLS-00306: wrong number or types of arguments in call to 'OUTPUT_PARAMETER' ORA-06550: line 1, column 7: PL/SQL: Statement ignored SELECT event_timestamp, os_username, userhost,dbusername, action_name, return_code,object_schema, object_name, sql_text FROM unified_audit_trail WHERE object_name='OUTPUT_PARAMETER' @pr ============================== EVENT_TIMESTAMP : 12-AUG-24 10.56.17.340044 AM OS_USERNAME : jiulusun USERHOST : my machine DBUSERNAME : JIULUSUN ACTION_NAME : EXECUTE RETURN_CODE : 0 OBJECT_SCHEMA : JIULUSUN OBJECT_NAME : OUTPUT_PARAMETER SQL_TEXT : BEGIN output_parameter('a'); END; PL/SQL procedure successfully completed.</code> So the failed execution is not in the audit trail. Is this a bug, or do I misunderstand how unified auditing works?
Categories: DBA Blogs

Options for Change Data Capture

Tom Kyte - Tue, 2024-09-03 02:46
Chris & Connor, we are working on a new project and were wondering what options are currently available / provided by Oracle for Change Data Capture, besides the DIY approaches such as materialized views (MVIEW logs mostly), triggers, etc. Does Oracle 11.2.0.4 and above provide an out-of-the-box solution? I have heard that Oracle CDC is being deprecated; is there some option available that will track all the data changes made to a table? Thanks, BC MT, MI
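
One of the DIY building blocks mentioned above, sketched out (the table name is an example): a materialized view log that records changed rows on the source table:

CREATE MATERIALIZED VIEW LOG ON orders
  WITH PRIMARY KEY, ROWID;
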
Categories: DBA Blogs

Prevent future dates in the inspection_date column of a table

Tom Kyte - Sun, 2024-09-01 14:26
I have a table called DAY_INSP and a column INS_DATE which allows dates in the future. I want to prevent that by only allowing today's date or older. Do I need to add a trigger or constraint to do this? Any help would be appreciated
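
A CHECK constraint cannot reference SYSDATE, so a trigger is the usual route; a minimal sketch using the names from the question (the error number is arbitrary within the user-defined -20000..-20999 range):

CREATE OR REPLACE TRIGGER day_insp_no_future
BEFORE INSERT OR UPDATE OF ins_date ON day_insp
FOR EACH ROW
BEGIN
  -- Allow today or older; reject anything in the future
  IF :NEW.ins_date > SYSDATE THEN
    RAISE_APPLICATION_ERROR(-20001, 'INS_DATE cannot be in the future');
  END IF;
END;
/
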
Categories: DBA Blogs

Video on DataGuard Switchover -- with RAC and Single Instance

Hemant K Chitale - Sun, 2024-09-01 04:36

 I've posted a demonstration of DataGuard Switchover, using RAC and Single Instance as the Primary/Standby pair.


Fundamentally, there is no difference if either or both of the databases are RAC or Single Instance.


A Switchover is a Graceful operation with No Data Loss: the Primary sends the "End-Of-Redo Marker" to the Standby, at which point the Reversal of Roles happens.  Therefore, you can execute Switchover between the two databases (servers / data centres / sites) multiple times without loss of data.


A Failover, on the other hand, involves data loss and the erstwhile Primary does not revert to a Standby role but must be recreated / refreshed as a Standby from the new Primary.
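
For a Broker-managed configuration, the switchover itself can be a short sequence like the following sketch (standby_db stands for the standby's db_unique_name; output omitted):

DGMGRL> SHOW CONFIGURATION;
DGMGRL> VALIDATE DATABASE standby_db;
DGMGRL> SWITCHOVER TO standby_db;
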

Categories: DBA Blogs

Getting inspiration from daily use (or not)

Tim Hall - Sun, 2024-09-01 03:11

It’s been an odd few months on the content creation front. When Oracle Database 23c/23ai Free was released I was no longer under NDA for the shipped functionality, and I pushed out a glut of version 23 content here. Since then I’ve been waiting for the full on-prem release so I can publish the remaining … Continue reading "Getting inspiration from daily use (or not)"

The post Getting inspiration from daily use (or not) first appeared on The ORACLE-BASE Blog.

Manage Feathers.js authentication in Swagger UI

Yann Neuhaus - Fri, 2024-08-30 09:16

In addition to my previous articles Add a UI to explore the Feathers.js API and Add authentication in a Feathers.js REST API, today, I will explain how to manage Feathers.js authentication in Swagger UI and make it simple for our users.

After enabling authentication on the API methods, using Swagger UI for testing becomes difficult, if not impossible. To solve this problem, Swagger UI needs to support authentication.

In this article, I will reuse the code from my previous articles.

First step: add authentication specifications to the application

For Swagger UI to handle API authentication, I need to update the specifications in the app.ts file. Therefore, in the specs object, I add the components and security parts:

app.configure(
  swagger({
    specs: {
      info: {
        title: 'Workshop API ',
        description: 'Workshop API rest services',
        version: '1.0.0'
      },
      components: {
        securitySchemes: {
          BearerAuth: {
            type: 'http',
            scheme: 'bearer',
          },
        },
      },
      security: [{ BearerAuth: [] }],
    },
    ui: swagger.swaggerUI({ 
      docsPath: '/docs',
    })
  })
)

This new configuration, general to the API, tells Swagger that authenticated methods will use a Bearer token.

Second step: define documentation for authentication methods

The authentication service generated by the CLI does not contain any specifications for Swagger UI, which results in an error message being displayed on the interface. So I’m adding the specifications needed by Swagger to manage authentication methods:


export const authentication = (app: Application) => {
  const authentication = new AuthenticationService(app)

  authentication.register('jwt', new JWTStrategy())
  authentication.register('local', new LocalStrategy())

  // Swagger definition.
  authentication.docs = {
    idNames: {
      remove: 'accessToken',
    },
    idType: 'string',
    securities: ['remove', 'removeMulti'],
    multi: ['remove'],
    schemas: {
      authRequest: {
        type: 'object',
        properties: {
          strategy: { type: 'string' },
          email: { type: 'string' },
          password: { type: 'string' },
        },
      },
      authResult: {
        type: 'object',
        properties: {
          accessToken: { type: 'string' },
          authentication: {
            type: 'object',
            properties: {
              strategy: { type: 'string' },
            },
          },
          payload: {
            type: 'object',
            properties: {},
          },
          user: { $ref: '#/components/schemas/User' },
        },
      },
    },
    refs: {
      createRequest: 'authRequest',
      createResponse: 'authResult',
      removeResponse: 'authResult',
      removeMultiResponse: 'authResult',
    },
    operations: {
      remove: {
        description: 'Logout the current user',
        'parameters[0].description': 'accessToken of the current user',
      },
      removeMulti: {
        description: 'Logout the current user',
        parameters: [],
      },
    },
  };

  app.use('authentication', authentication)
}

The authentication.docs block defines how Swagger UI can interact with the authentication service.

Third step: add the security config into the service

At the beginning of my service file (workshop.ts), in the docs definition, I add the list of methods that must be authenticated in the securities array:

docs: createSwaggerServiceOptions({
  schemas: { 
    workshopSchema, 
    workshopDataSchema, 
    workshopPatchSchema, 
    workshopQuerySchema 
  },
  docs: {
      // any options for service.docs can be added here
      description: 'Workshop service',
      securities: ['find', 'get', 'create', 'update', 'patch', 'remove'],
  }
})

Swagger UI authentication tests

As a result of the configuration, new elements appear on the interface:

  • The Authorize button at the top of the page
  • Padlocks at the end of authenticated method lines
  • A more complete definition of authentication service methods
Swagger Authentication

Obtain an access token with the UI

First, I need to log in with my credentials to get a token. After that, the bearer token will be used in the next authenticated calls:

Authentication request

The JSON object sent in the request body contains the identification information. In return, we’ll get an accessToken useful for the next calls:

Authentication response

Second step: define the bearer token for authenticated requests

Each call to an authenticated method requires the token to be sent in the HTTP headers. We therefore register the token in Swagger UI by clicking on the Authorize button and pasting the token obtained during the authentication step:

Define authentication credentials

Testing authenticated methods

Once the token has been entered into Swagger UI, I use the methods without worrying about authentication:

Call authenticated method

Swagger UI sends the token in the HTTP headers to authenticate the request. Then the API returns the data:

Authenticated response

Conclusion

Managing Feathers.js authentication in Swagger UI is entirely possible. With a little extra configuration, it becomes easy to use authentication and authenticated methods. It’s even simple to test all API methods and document their use.

The article Manage Feathers.js authentication in Swagger UI appeared first on the dbi Blog.

DBMS_XMLGEN limited workaround for converting LONG columns into CLOBs

Flavio Casetta - Fri, 2024-08-30 03:12

If you landed here, it is because you have hit the problem of dealing with LONG columns.

There are some good articles elsewhere about how to tackle this old problem; my contribution here is to point out some limitations that have apparently been overlooked when using the technique explained below.

It's a mystery to me why, after so many years, we can't get rid of this annoyance once and for all. Why not "simply" add an equivalent CLOB column, at least in the case of data dictionary columns?
Come on!

I needed to extract the content of the TEXT column from DBA_VIEWS and DBA_MVIEWS, possibly without having to go through an INSERT into a table (using the TO_LOB function), which is the best workaround when you deal with static data, for one-off operations.

I stumbled upon an old video of Connor McDonald showing how to extract the content of a LONG column exploiting the XML API DBMS_XMLGEN.GETXMLTYPE. This trick seemed to save the day after some adaptation for my case, and actually I was almost ready to celebrate when I started hitting some errors while doing further tests.

To cut a long story short, eventually I encountered the following problems:

  1. API documentation for version 19c of DBMS_XMLGEN.SETCONVERTSPECIALCHARS is incorrect as it mentions a parameter "conv" but the real parameter name is "replace". This typo is still present in the latest version of the documentation of 23ai.

  2. DBMS_XMLGEN.GETXMLTYPE and DBMS_XMLGEN.GETXML won't perform special-character escaping via DBMS_XMLGEN.SETCONVERTSPECIALCHARS if the column type is LONG.
    I was getting parsing errors when using Connor's EXTRACTVALUE technique because the XML document contained stray < or > characters in the source (as in WHERE conditions inside the query source).

  3.  DBMS_XMLGEN.GETXMLTYPE and DBMS_XMLGEN.GETXML will truncate the content to the first 32K for LONG columns.

Problem #1 was easily solved; problem #2 was solved by extracting the data using REGEXP_SUBSTR instead of EXTRACTVALUE, but this was possible only because I was working on an XML document containing a single ROW tag at a time. For multiple rows this solution will not work.

  FUNCTION long2clob
     ( p_qry in clob, -- must return a single row!
       p_col in varchar2)
  RETURN CLOB
  IS
    c        CLOB;
  BEGIN
    c := regexp_substr(
           dbms_xmlgen.getxml(p_qry),
           '(<ROW>.*<' || p_col || '>(.*)</' || p_col || '>.*</ROW>)',
           1,
           1,
           'cn'
           ,2
         );
    return c;

  END long2clob;
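
As a usage sketch (assuming the function above is compiled as a standalone function; the owner and view name are placeholders), extracting a single view's source as a CLOB could look like this:

  SELECT long2clob(
           'select text from dba_views where owner = ''SCOTT'' and view_name = ''EMP_V''',
           'TEXT') AS view_source
    FROM dual;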

Problem #3 remains, unless the LONG values are shorter than 32K.
Unfortunately we do have some views whose source exceeds 32K, but considering the usage of this function I'll probably live with this limitation for the moment.
By the way, SQL Developer won't allow you to edit a view larger than 32K, and to me this sounds like an invitation to avoid such situations.

Finally, I also tried to see what happens when you supply a LONG column to the JSON_OBJECT function; unfortunately, it returns the exception:

ORA-40654: Input to JSON generation function has unsupported data type.

That's all folks!
(quote)

Categories: DBA Blogs

Oracle mainframe instances

Tom Kyte - Thu, 2024-08-29 13:26
G'Day Ask Tom, I have been asked to look into curating an IBM z/OS mainframe interactive blog community on LinkedIn as an Oracle expert. It occurred to me to ask what the installed base on IBM mainframes might be, roughly. There seems to be a conflict of interest here with IBM's own commercial database offering, DB2, so it is an open question for me. If Oracle RDBMS on the mainframe is a rarity, a blog may not return value for the effort expended. Thank you for your time, Best regards, Mike
Categories: DBA Blogs

Exposing APEX app on the Internet running on internal on-premise ORDS for supplier data from EBS

Tom Kyte - Thu, 2024-08-29 13:26
**Use Case:** We have built a supplier portal app in APEX and now would like to give our suppliers access to it. **Best Practices for Exposing APEX Apps on the Internet:** When exposing APEX apps on the WWW/Internet that use an on-premises EBS instance/Oracle database as a source database, it is essential to follow best practices to ensure security and performance. Below are some key considerations we have accounted for: 1. **New Domain Name and DNS Record:** - We have registered a new domain name and created a DNS record to access the app, e.g., supplier.XXXXXX.com. 2. **SSL and Domain Protection:** - We have added SSL and full domain protection with the domain hosting service to ensure secure communication. 3. **Firewall and NAT Configuration:** - We have configured firewall rules and NAT records to route traffic through internal reverse proxy servers. 4. **APEX Authentication Schema:** - We have implemented APEX authentication using Microsoft Active Directory (AD) and Azure for secure user authentication. 5. **User Authorization:** - We have set up customer user authorization to control access to the APEX app. **Additional Considerations:** We are also considering implementing a list of valid external IP addresses on our firewall to restrict access to the domain supplier.XXXXX.com based on known external IPs from our suppliers. Here are the pros and cons of this approach: **Pros:** - **Enhanced Security:** Restricting access to known IP addresses reduces the risk of unauthorized access. - **Control:** Provides better control over who can access the application. **Cons:** - **Maintenance:** Managing and updating the list of valid IP addresses can be time-consuming. - **Accessibility:** Legitimate users with dynamic IP addresses may face access issues. Thank you in advance for any additional thoughts, guidelines, or best practices.
Categories: DBA Blogs
