Mudlet 2.1 Database Problem

Jor'Mox
Posts: 1146
Joined: Wed Apr 03, 2013 2:19 am

Re: Mudlet 2.1 Database Problem

Post by Jor'Mox »

I get most of my info from here: http://wiki.mudlet.org/w/Manual:Lua_Fun ... _Functions
And that is where it says to use just the field names.

Vadi
Posts: 5042
Joined: Sat Mar 14, 2009 3:13 pm

Re: Mudlet 2.1 Database Problem

Post by Vadi »

Thanks, I've corrected that. And the source code comments, which is where that was taken from...

I've added this example to the unit testing as well: https://github.com/vadi2/mudlet-lua/blo ... B.lua#L152

Vadi
Posts: 5042
Joined: Sat Mar 14, 2009 3:13 pm

Re: Mudlet 2.1 Database Problem

Post by Vadi »

I've also updated the documentation regarding db:create() and adding new fields.
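
For anyone following along, here is a minimal sketch of the scenario that documentation change covers (the database, sheet, and column names are made up for illustration, not taken from the wiki):
Code: lua
-- First version of a script: creates the sheet with two columns.
local mydb = db:create("people_db", {
   friends = { name = "", city = "" }
})

-- A later version of the same script adds a column to the schema.
-- At the time of this thread, re-running db:create like this did not add the
-- new "notes" column to a sheet that already existed on disk, which is what
-- the rest of this thread is about fixing.
mydb = db:create("people_db", {
   friends = { name = "", city = "", notes = "" }
})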

Jor'Mox
Posts: 1146
Joined: Wed Apr 03, 2013 2:19 am

Re: Mudlet 2.1 Database Problem

Post by Jor'Mox »

So, by messing around with the db:_migrate function, I managed to tweak it so that it still creates new sheets and also adds new columns to existing sheets. I'm not really sure if it is all kosher, as I know absolutely nothing about SQL, but as far as I can tell it seems to be okay. I scavenged some stuff from db:fetch_sql and threw out the db:_coerce_sheet part, since all we needed was the names of the current columns. I have outlined the part I hacked in; everything else has been left alone.
Code: lua
function db:_migrate(db_name, s_name)
   local conn = db.__conn[db_name]
   local schema = db.__schema[db_name][s_name]

   local current_columns = {}
	
--   -- The PRAGMA table_info command is a query which returns all of the columns currently
--   -- defined in the specified table. The purpose of this section is to see if any new columns
--   -- have been added.
--   local cur = conn:execute("PRAGMA table_info('"..s_name.."')") -- currently broken - LuaSQL bug, needs to be upgraded for new sqlite API
--
--   if cur ~= 0 then
--      local row = cur:fetch({}, "a")
--
--      while row do
--         current_columns[row.name] = row.type
--         row = cur:fetch({}, "a")
--      end
--      cur:close()
--   end

---------------  GETS ALL COLUMNS FROM SHEET IF IT EXISTS ----- ONLY WORKS FOR SHEETS WITH DATA IN THEM
	db:echo_sql("SELECT * FROM "..s_name)
	local cur = conn:execute("SELECT * FROM "..s_name)

	if cur and cur ~= 0 then
		local row = cur:fetch({}, "a")
		cur:close()
		if row then
			for k,v in pairs(row) do
				current_columns[k] = ""
			end
		end
	end
---------------

   -- The SQL definition of a column is:
   --    "column_name" column_type NULL
   -- The db module does not presently support columns that are required. Everything is optional,
   -- everything may be NULL / nil.
   -- If you specify a column's type, you also specify its default value.
   local sql_column = ', "%s" %s NULL'
   local sql_column_default = sql_column..' DEFAULT %s'

   if table.is_empty(current_columns) then
      -- At this point, we know that the specified table does not exist in the database and so we
      -- should create it.

      -- Every sheet has an implicit _row_id column. It is not presently (and likely never will be)
      -- supported to define the primary key of any sheet.
      local sql_chunks = {"CREATE TABLE ", s_name,  '("_row_id" INTEGER PRIMARY KEY AUTOINCREMENT'}

      -- We iterate over every defined column, and add a line which creates it.
      for key, value in pairs(schema.columns) do
         local sql = ""
         if value == nil then
            sql = sql_column:format(key, db:_sql_type(value))
         else
            sql = sql_column_default:format(key, db:_sql_type(value), db:_sql_convert(value))
         end
         if (type(schema.options._unique) == "table" and table.contains(schema.options._unique, key))
            or (type(schema.options._unique) == "string" and schema.options._unique == key) then
            sql = sql .. " UNIQUE"
         end
         sql_chunks[#sql_chunks+1] = sql
      end

      sql_chunks[#sql_chunks+1] = ")"

      local sql = table.concat(sql_chunks, "")
      db:echo_sql(sql)
      conn:execute(sql)

   else
      -- At this point we know that the sheet already exists, but we are concerned if the current
      -- definition includes columns which may be added.
      local sql_chunks = {}
      local sql_add = 'ALTER TABLE %s ADD COLUMN "%s" %s NULL DEFAULT %s'

      for k, v in pairs(schema.columns) do
         local t = db:_sql_type(v)
         v = db:_sql_convert(v)

         -- Here we test if a given column exists in the sheet already, and if not, we add that
         -- column.
         if not current_columns[k] then
            display(k)
            local sql = sql_add:format(s_name, k, t, v)
            conn:execute(sql)
            db:echo_sql(sql)
         end
      end
   end

   -- On every invocation of db:create we run the code that creates indexes, as that code will
   -- do nothing if the specific indexes already exist. This is enforced by the db:_index_name
   -- function creating a unique index.
   --
   -- Note that in no situation will an existing index be deleted.

   -- make up current_columns, as pragma_info currently does not populate it, due to luasql bug
   for key, value in pairs(schema.columns) do
      current_columns[key] =  db:_sql_type(value)
   end

   db:_migrate_indexes(conn, s_name, schema, current_columns)
   db:echo_sql("COMMIT")
   conn:commit()
   conn:execute("VACUUM")
end
So, I adjusted it a bit to fix a few problems it was having. It still only works for sheets that have data in them though, so I am thinking of coming up with a way of adding a dummy row of data and then deleting it.

Jor'Mox
Posts: 1146
Joined: Wed Apr 03, 2013 2:19 am

Re: Mudlet 2.1 Database Problem

Post by Jor'Mox »

I finished messing with the db:_migrate function. It now successfully adds columns to existing sheets, whether they have any data in them or not. To do it, I first just select the sheet and try to grab a row. If there is one, I use the columns in it as the current columns. If there isn't, I add a new row (with a fixed value in the first string type column found in the schema), grab it and use the data from it to determine what columns are in the sheet, then delete the row I just added. It then continues as it did before, only this time the current_columns table is correctly populated.
Code: lua
function db:_migrate(db_name, s_name)
   local conn = db.__conn[db_name]
   local schema = db.__schema[db_name][s_name]

   local current_columns = {}
	
--   -- The PRAGMA table_info command is a query which returns all of the columns currently
--   -- defined in the specified table. The purpose of this section is to see if any new columns
--   -- have been added.
--   local cur = conn:execute("PRAGMA table_info('"..s_name.."')") -- currently broken - LuaSQL bug, needs to be upgraded for new sqlite API
--
--   if cur ~= 0 then
--      local row = cur:fetch({}, "a")
--
--      while row do
--         current_columns[row.name] = row.type
--         row = cur:fetch({}, "a")
--      end
--      cur:close()
--   end

---------------  GETS ALL COLUMNS FROM SHEET IF IT EXISTS
	db:echo_sql("SELECT * FROM "..s_name)
	local cur = conn:execute("SELECT * FROM "..s_name)  -- select the sheet

	if cur and cur ~= 0 then
		local row = cur:fetch({}, "a") -- grab the first row, if any
		if not row then -- if no first row then
			local col = ""
			for k,v in pairs(schema.columns) do -- look through sheet schema to find the first column that is text
				if type(k) == "number" then
					if string.sub(v,1,1) ~= "_" then col = v break end
				else
					if string.sub(k,1,1) ~= "_" and type(v) == "string" then col = k break end
				end
			end
			db:add({_db_name = db_name, _sht_name = s_name},{[col] = "test"}) -- add row with found column set as "test"
			cur = conn:execute("SELECT * FROM "..s_name) -- select the sheet
			row = cur:fetch({}, "a") -- grab the newly created row
			-- delete the newly created row
			db:delete({_db_name = db_name, _sht_name = s_name},db:eq({database = db_name, sheet = s_name, name = col, type = "string"},"test"))
			--display(row)
		end
		cur:close()
		if row then -- add each column from row to current_columns table
			for k,v in pairs(row) do
				current_columns[k] = ""
			end			
		end
	end
---------------

   -- The SQL definition of a column is:
   --    "column_name" column_type NULL
   -- The db module does not presently support columns that are required. Everything is optional,
   -- everything may be NULL / nil.
   -- If you specify a column's type, you also specify its default value.
   local sql_column = ', "%s" %s NULL'
   local sql_column_default = sql_column..' DEFAULT %s'

   if table.is_empty(current_columns) then
      -- At this point, we know that the specified table does not exist in the database and so we
      -- should create it.

      -- Every sheet has an implicit _row_id column. It is not presently (and likely never will be)
      -- supported to define the primary key of any sheet.
      local sql_chunks = {"CREATE TABLE ", s_name,  '("_row_id" INTEGER PRIMARY KEY AUTOINCREMENT'}

      -- We iterate over every defined column, and add a line which creates it.
      for key, value in pairs(schema.columns) do
         local sql = ""
         if value == nil then
            sql = sql_column:format(key, db:_sql_type(value))
         else
            sql = sql_column_default:format(key, db:_sql_type(value), db:_sql_convert(value))
         end
         if (type(schema.options._unique) == "table" and table.contains(schema.options._unique, key))
            or (type(schema.options._unique) == "string" and schema.options._unique == key) then
            sql = sql .. " UNIQUE"
         end
         sql_chunks[#sql_chunks+1] = sql
      end

      sql_chunks[#sql_chunks+1] = ")"

      local sql = table.concat(sql_chunks, "")
      db:echo_sql(sql)
      conn:execute(sql)

   else
      -- At this point we know that the sheet already exists, but we are concerned if the current
      -- definition includes columns which may be added.
      local sql_chunks = {}
      local sql_add = 'ALTER TABLE %s ADD COLUMN "%s" %s NULL DEFAULT %s'

      for k, v in pairs(schema.columns) do
         local t = db:_sql_type(v)
         v = db:_sql_convert(v)

         -- Here we test if a given column exists in the sheet already, and if not, we add that
         -- column.
         if not current_columns[k] then
            --display("Adding: " .. k)
            local sql = sql_add:format(s_name, k, t, v)
            conn:execute(sql)
            db:echo_sql(sql)
         end
      end
   end

   -- On every invocation of db:create we run the code that creates indexes, as that code will
   -- do nothing if the specific indexes already exist. This is enforced by the db:_index_name
   -- function creating a unique index.
   --
   -- Note that in no situation will an existing index be deleted.

   -- make up current_columns, as pragma_info currently does not populate it, due to luasql bug
   for key, value in pairs(schema.columns) do
      current_columns[key] =  db:_sql_type(value)
   end

   db:_migrate_indexes(conn, s_name, schema, current_columns)
   db:echo_sql("COMMIT")
   conn:commit()
   conn:execute("VACUUM")
end

Vadi
Posts: 5042
Joined: Sat Mar 14, 2009 3:13 pm

Re: Mudlet 2.1 Database Problem

Post by Vadi »

Could you make it so that if the pragma works, it uses that data instead, as a safeguard for when a LuaSQL with working pragmas is available on the platform?

Jor'Mox
Posts: 1146
Joined: Wed Apr 03, 2013 2:19 am

Re: Mudlet 2.1 Database Problem

Post by Jor'Mox »

Sure, here you go. As it is, the only condition I can think of that might present problems here would be if the original schema for a sheet had no string-type columns and then more columns were added. But even if that case fails, this should still be a big step up in functionality.
Code: lua
--db.debug_sql = true
--db.debug_sql = false
function db:_migrate(db_name, s_name)
   local conn = db.__conn[db_name]
   local schema = db.__schema[db_name][s_name]

   local current_columns = {}
	
   -- The PRAGMA table_info command is a query which returns all of the columns currently
   -- defined in the specified table. The purpose of this section is to see if any new columns
   -- have been added.
   local cur = conn:execute("PRAGMA table_info('"..s_name.."')") -- currently broken - LuaSQL bug, needs to be upgraded for new sqlite API

	if cur and cur ~= 0 then
		local row = cur:fetch({}, "a")
		if row then
			while row do
				current_columns[row.name] = row.type
				row = cur:fetch({}, "a")
			end
			cur:close()
		else
			---------------  GETS ALL COLUMNS FROM SHEET IF IT EXISTS
			db:echo_sql("SELECT * FROM "..s_name)
			cur = conn:execute("SELECT * FROM "..s_name)  -- select the sheet
		
			if cur and cur ~= 0 then
				local row = cur:fetch({}, "a") -- grab the first row, if any
				if not row then -- if no first row then
					local col = ""
					for k,v in pairs(schema.columns) do -- look through sheet schema to find the first column that is text
						if type(k) == "number" then
							if string.sub(v,1,1) ~= "_" then col = v break end
						else
							if string.sub(k,1,1) ~= "_" and type(v) == "string" then col = k break end
						end
					end
					db:add({_db_name = db_name, _sht_name = s_name},{[col] = "test"}) -- add row with found column set as "test"
					cur = conn:execute("SELECT * FROM "..s_name) -- select the sheet
					row = cur:fetch({}, "a") -- grab the newly created row
					-- delete the newly created row
					db:delete({_db_name = db_name, _sht_name = s_name},db:eq({database = db_name, sheet = s_name, name = col, type = "string"},"test"))
					--display(row)
				end
				cur:close()
				if row then -- add each column from row to current_columns table
					for k,v in pairs(row) do
						current_columns[k] = ""
					end			
				end
			end
		end
   end
		
   -- The SQL definition of a column is:
   --    "column_name" column_type NULL
   -- The db module does not presently support columns that are required. Everything is optional,
   -- everything may be NULL / nil.
   -- If you specify a column's type, you also specify its default value.
   local sql_column = ', "%s" %s NULL'
   local sql_column_default = sql_column..' DEFAULT %s'

   if table.is_empty(current_columns) then
      -- At this point, we know that the specified table does not exist in the database and so we
      -- should create it.

      -- Every sheet has an implicit _row_id column. It is not presently (and likely never will be)
      -- supported to define the primary key of any sheet.
      local sql_chunks = {"CREATE TABLE ", s_name,  '("_row_id" INTEGER PRIMARY KEY AUTOINCREMENT'}

      -- We iterate over every defined column, and add a line which creates it.
      for key, value in pairs(schema.columns) do
         local sql = ""
         if value == nil then
            sql = sql_column:format(key, db:_sql_type(value))
         else
            sql = sql_column_default:format(key, db:_sql_type(value), db:_sql_convert(value))
         end
         if (type(schema.options._unique) == "table" and table.contains(schema.options._unique, key))
            or (type(schema.options._unique) == "string" and schema.options._unique == key) then
            sql = sql .. " UNIQUE"
         end
         sql_chunks[#sql_chunks+1] = sql
      end

      sql_chunks[#sql_chunks+1] = ")"

      local sql = table.concat(sql_chunks, "")
      db:echo_sql(sql)
      conn:execute(sql)

   else
      -- At this point we know that the sheet already exists, but we are concerned if the current
      -- definition includes columns which may be added.
      local sql_chunks = {}
      local sql_add = 'ALTER TABLE %s ADD COLUMN "%s" %s NULL DEFAULT %s'

      for k, v in pairs(schema.columns) do
         local t = db:_sql_type(v)
         v = db:_sql_convert(v)

         -- Here we test if a given column exists in the sheet already, and if not, we add that
         -- column.
         if not current_columns[k] then
            local sql = sql_add:format(s_name, k, t, v)
            conn:execute(sql)
            db:echo_sql(sql)
         end
      end
   end

   -- On every invocation of db:create we run the code that creates indexes, as that code will
   -- do nothing if the specific indexes already exist. This is enforced by the db:_index_name
   -- function creating a unique index.
   --
   -- Note that in no situation will an existing index be deleted.

   -- make up current_columns, as pragma_info currently does not populate it, due to luasql bug
   for key, value in pairs(schema.columns) do
      current_columns[key] =  db:_sql_type(value)
   end

   db:_migrate_indexes(conn, s_name, schema, current_columns)
   db:echo_sql("COMMIT")
   conn:commit()
   conn:execute("VACUUM")
end
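
Regarding the edge case mentioned above (a sheet whose original schema has no string-type columns), here is a minimal sketch of what that would look like; the names are invented for illustration:
Code: lua
-- Hypothetical schema with no string-typed columns. The dummy-row fallback
-- above scans schema.columns for a string value to write "test" into, and
-- with a schema like this it would find none:
local mydb = db:create("stats_db", {
   scores = { kills = 0, deaths = 0, exp = 0 }
})
-- If "scores" already exists but is empty and a later schema adds a column,
-- there is no text column for the temporary row, so the existing columns
-- cannot be discovered this way; a working PRAGMA table_info would be the
-- only reliable source in that case.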

Vadi
Posts: 5042
Joined: Sat Mar 14, 2009 3:13 pm

Re: Mudlet 2.1 Database Problem

Post by Vadi »

I still need to add this, but it's not forgotten. Just been terribly busy and when not, unmotivated from being busy :(

Jor'Mox
Posts: 1146
Joined: Wed Apr 03, 2013 2:19 am

Re: Mudlet 2.1 Database Problem

Post by Jor'Mox »

I know the feeling. Been pretty swamped myself recently, and I know that this is a large enough change that it needs some serious testing, and possibly some tweaking to account for special cases. I just hope it makes it into the next release so that I can start using databases for scripts I provide to other people.

Jor'Mox
Posts: 1146
Joined: Wed Apr 03, 2013 2:19 am

Re: Mudlet 2.1 Database Problem

Post by Jor'Mox »

So, I did some digging into the timestamp issue I have been seeing. First, the database indicates that the datatype for the timestamp field is an integer, but the data actually being stored initially is a string with the timestamp (2013-07-16 16:19:41 is an example from one sample data set I generated). That data is then retrieved and transformed into an epoch value when it is fetched. However, when that value is turned back into a user-readable time, the hour is incorrect (for that example, it displays as 17:19:41, not 16:19:41). Also, I have to assume that those times are in GMT, since the timestamps were generated in the morning for me, as opposed to mid-afternoon.

When a row is retrieved and updated using the db:update function, the timestamp is overwritten with the calculated epoch value, instead of the string it was originally assigned. Once it is overwritten, the row is no longer accessible via db:fetch (since it tries to parse the epoch value as if it were a date/time string). I would think that the best approach to fixing this issue would be to store the epoch value initially, with no parsing necessary except for display to the user.
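
To illustrate the one-hour shift, here is a small sketch of the underlying os.time behaviour (not Mudlet's actual code path through datetime:parse):
Code: lua
-- Build an epoch for the example stamp 2013-07-16 16:19:41 twice: once
-- claiming daylight saving time is not in effect, once claiming it is.
local fields = { year = 2013, month = 7, day = 16, hour = 16, min = 19, sec = 41 }

fields.isdst = false
local t_std = os.time(fields)   -- fields interpreted as standard time

fields.isdst = true
local t_dst = os.time(fields)   -- fields interpreted as daylight-saving time

-- In a timezone observing DST on that date, t_std is one hour later than
-- t_dst, so formatting it back gives 17:19:41 rather than 16:19:41.
print(os.date("%H:%M:%S", t_std), os.date("%H:%M:%S", t_dst))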

However, since whatever code sets the timestamp when a new entry is added does not seem to be in DB.lua (db:add sends the SQL command directly, without any timestamp involved, so I have to assume the timestamp is generated either via SQL itself or by something buried in the C++ code), it is probably easier to modify the db:_coerce_sheet function to handle timestamps written in either format.

This modified version of db:_coerce_sheet allows timestamps to work via the db:update function (both when the timestamp is left unchanged and when it is set to a new value).
Code: lua
function db:_coerce_sheet(sheet, tbl)
   if tbl then
      tbl._row_id = tonumber(tbl._row_id)

      for k, v in pairs(tbl) do
         if k ~= "_row_id" then
            local field = sheet[k]
            if field.type == "number" then
               tbl[k] = tonumber(tbl[k]) or tbl[k]
            elseif field.type == "datetime" then
               -- Accept either an epoch number (as written back by db:update)
               -- or the original GMT date/time string from CURRENT_TIMESTAMP.
               if tonumber(tbl[k]) then
                  tbl[k] = db:Timestamp(tonumber(tbl[k]), nil, true)
               else
                  tbl[k] = db:Timestamp(datetime:parse(tbl[k], nil, true))
               end
            end
         end
      end
      return tbl
   end
end
I did confirm that the "CURRENT_TIMESTAMP" is set to the current GMT time. Also, the one-hour difference was due to daylight saving time being set as false and adjusted for via the os.time function. So long as the only values stored as strings are the initial ones (which are always GMT), it should be fine, but if another value were stored as a string (which has no place in it to record a daylight saving flag), it could give incorrect results.

As it is, I think the usage of timestamps should be documented to encourage providing the timestamp explicitly when adding a new entry (e.g. adding something like time = db:Timestamp(getTime()) to the table given to the db:add function), rather than relying on the CURRENT_TIMESTAMP functionality provided currently, unless there is no desire to modify those values at a later time and the user is comfortable with everything being several hours off depending on their location.
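
A minimal sketch of that recommendation, assuming db:Timestamp accepts an epoch value such as os.time() (the sheet and column names are invented for the example):
Code: lua
-- Hypothetical sheet with a datetime column defaulting to CURRENT_TIMESTAMP.
local mydb = db:create("combat_log", {
   kills = { name = "", area = "", time = db:Timestamp("CURRENT_TIMESTAMP") }
})

-- Supplying the timestamp explicitly at insert time stores an epoch value
-- from the start, so later db:update calls and local-time display stay
-- consistent, instead of relying on the GMT string that the default writes.
db:add(mydb.kills, { name = "rat", area = "sewers", time = db:Timestamp(os.time()) })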
